00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 978 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3645 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.128 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.223 > git --version # 'git version 2.39.2' 00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.251 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.251 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.902 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.915 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.926 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.927 > git config core.sparsecheckout # timeout=10 00:00:08.939 > git read-tree -mu HEAD # timeout=10 00:00:08.954 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.974 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.974 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.059 [Pipeline] Start of Pipeline 00:00:09.073 [Pipeline] library 00:00:09.074 Loading library shm_lib@master 00:00:09.075 Library shm_lib@master is cached. Copying from home. 00:00:09.085 [Pipeline] node 00:00:09.092 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:09.093 [Pipeline] { 00:00:09.102 [Pipeline] catchError 00:00:09.104 [Pipeline] { 00:00:09.118 [Pipeline] wrap 00:00:09.128 [Pipeline] { 00:00:09.136 [Pipeline] stage 00:00:09.138 [Pipeline] { (Prologue) 00:00:09.161 [Pipeline] echo 00:00:09.163 Node: VM-host-SM9 00:00:09.171 [Pipeline] cleanWs 00:00:09.181 [WS-CLEANUP] Deleting project workspace... 00:00:09.181 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.187 [WS-CLEANUP] done 00:00:09.393 [Pipeline] setCustomBuildProperty 00:00:09.499 [Pipeline] httpRequest 00:00:09.930 [Pipeline] echo 00:00:09.932 Sorcerer 10.211.164.20 is alive 00:00:09.940 [Pipeline] retry 00:00:09.942 [Pipeline] { 00:00:09.958 [Pipeline] httpRequest 00:00:09.963 HttpMethod: GET 00:00:09.964 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.965 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.982 Response Code: HTTP/1.1 200 OK 00:00:09.983 Success: Status code 200 is in the accepted range: 200,404 00:00:09.983 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.126 [Pipeline] } 00:00:11.144 [Pipeline] // retry 00:00:11.151 [Pipeline] sh 00:00:11.431 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.446 [Pipeline] httpRequest 00:00:12.201 [Pipeline] echo 00:00:12.203 Sorcerer 10.211.164.20 is alive 00:00:12.214 [Pipeline] retry 00:00:12.216 [Pipeline] { 00:00:12.232 [Pipeline] httpRequest 00:00:12.236 HttpMethod: GET 00:00:12.237 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.237 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:12.251 Response Code: HTTP/1.1 200 OK 00:00:12.252 Success: Status code 200 is in the accepted range: 200,404 00:00:12.252 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:03.039 [Pipeline] } 00:01:03.059 [Pipeline] // retry 00:01:03.068 [Pipeline] sh 00:01:03.349 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:05.893 [Pipeline] sh 00:01:06.172 + git -C spdk log --oneline -n5 00:01:06.172 c13c99a5e test: Various fixes for Fedora40 00:01:06.172 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:06.172 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:06.173 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:06.173 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:06.193 [Pipeline] withCredentials 00:01:06.204 > git --version # timeout=10 00:01:06.218 > git --version # 'git version 2.39.2' 00:01:06.233 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:06.235 [Pipeline] { 00:01:06.245 [Pipeline] retry 00:01:06.247 [Pipeline] { 00:01:06.265 [Pipeline] sh 00:01:06.550 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:10.762 [Pipeline] } 00:01:10.781 [Pipeline] // retry 00:01:10.786 [Pipeline] } 00:01:10.804 [Pipeline] // withCredentials 00:01:10.815 [Pipeline] httpRequest 00:01:11.247 [Pipeline] echo 00:01:11.249 Sorcerer 10.211.164.20 is alive 00:01:11.259 [Pipeline] retry 00:01:11.261 [Pipeline] { 00:01:11.275 [Pipeline] httpRequest 00:01:11.280 HttpMethod: GET 00:01:11.280 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.281 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.282 Response Code: HTTP/1.1 200 OK 00:01:11.283 Success: Status code 200 is in the accepted range: 200,404 00:01:11.284 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:17.521 [Pipeline] } 00:01:17.539 [Pipeline] // retry 
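
[Editor's note] The steps above fetch the jbp, spdk, and dpdk tarballs from the internal package cache ("Sorcerer", 10.211.164.20) and unpack them into the workspace. The following is a minimal stand-alone sketch of that fetch-and-extract pattern; the mirror URL, workspace path, and tar flags are taken from the log, while the curl call and retry loop are illustrative assumptions standing in for the Jenkins httpRequest/retry steps.

#!/usr/bin/env bash
# Hypothetical sketch of the fetch-and-extract pattern seen in the log above.
set -euo pipefail

MIRROR=http://10.211.164.20/packages            # "Sorcerer" package cache from the log
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-vg-autotest
PKG=dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz

for attempt in 1 2 3; do                        # simple retry, mirroring [Pipeline] retry
    if curl -fSs -o "$WORKSPACE/$PKG" "$MIRROR/$PKG"; then
        break
    fi
    echo "fetch attempt $attempt failed, retrying..." >&2
    sleep 5
done

# --no-same-owner keeps extracted files owned by the invoking user, exactly as
# the "+ tar --no-same-owner -xf ..." steps in the log do.
tar --no-same-owner -xf "$WORKSPACE/$PKG" -C "$WORKSPACE"

[End of editor's note]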
00:01:17.547 [Pipeline] sh 00:01:17.828 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:19.746 [Pipeline] sh 00:01:20.027 + git -C dpdk log --oneline -n5 00:01:20.027 caf0f5d395 version: 22.11.4 00:01:20.027 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:20.027 dc9c799c7d vhost: fix missing spinlock unlock 00:01:20.027 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:20.027 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:20.042 [Pipeline] writeFile 00:01:20.056 [Pipeline] sh 00:01:20.337 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.350 [Pipeline] sh 00:01:20.635 + cat autorun-spdk.conf 00:01:20.635 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.635 SPDK_TEST_NVMF=1 00:01:20.635 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.635 SPDK_TEST_USDT=1 00:01:20.635 SPDK_RUN_UBSAN=1 00:01:20.635 SPDK_TEST_NVMF_MDNS=1 00:01:20.635 NET_TYPE=virt 00:01:20.635 SPDK_JSONRPC_GO_CLIENT=1 00:01:20.635 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:20.635 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:20.635 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.642 RUN_NIGHTLY=1 00:01:20.644 [Pipeline] } 00:01:20.657 [Pipeline] // stage 00:01:20.673 [Pipeline] stage 00:01:20.675 [Pipeline] { (Run VM) 00:01:20.689 [Pipeline] sh 00:01:20.970 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.970 + echo 'Start stage prepare_nvme.sh' 00:01:20.970 Start stage prepare_nvme.sh 00:01:20.970 + [[ -n 1 ]] 00:01:20.970 + disk_prefix=ex1 00:01:20.970 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:20.970 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:20.970 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:20.970 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.970 ++ SPDK_TEST_NVMF=1 00:01:20.970 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.970 ++ SPDK_TEST_USDT=1 00:01:20.970 ++ SPDK_RUN_UBSAN=1 00:01:20.970 ++ SPDK_TEST_NVMF_MDNS=1 00:01:20.970 ++ NET_TYPE=virt 00:01:20.970 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:20.970 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:20.970 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:20.970 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.970 ++ RUN_NIGHTLY=1 00:01:20.970 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:20.970 + nvme_files=() 00:01:20.970 + declare -A nvme_files 00:01:20.970 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.970 + nvme_files['nvme.img']=5G 00:01:20.970 + nvme_files['nvme-cmb.img']=5G 00:01:20.970 + nvme_files['nvme-multi0.img']=4G 00:01:20.970 + nvme_files['nvme-multi1.img']=4G 00:01:20.970 + nvme_files['nvme-multi2.img']=4G 00:01:20.970 + nvme_files['nvme-openstack.img']=8G 00:01:20.970 + nvme_files['nvme-zns.img']=5G 00:01:20.970 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.970 + (( SPDK_TEST_FTL == 1 )) 00:01:20.970 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.970 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:20.970 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.970 + for nvme in "${!nvme_files[@]}" 00:01:20.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:21.230 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.230 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:21.230 + echo 'End stage prepare_nvme.sh' 00:01:21.230 End stage prepare_nvme.sh 00:01:21.242 [Pipeline] sh 00:01:21.524 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.524 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:01:21.524 00:01:21.524 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:21.524 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:21.524 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.524 HELP=0 00:01:21.524 DRY_RUN=0 00:01:21.524 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:01:21.524 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.524 NVME_AUTO_CREATE=0 00:01:21.524 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:01:21.524 NVME_CMB=,, 00:01:21.524 NVME_PMR=,, 00:01:21.524 NVME_ZNS=,, 00:01:21.524 NVME_MS=,, 00:01:21.524 NVME_FDP=,, 00:01:21.524 
SPDK_VAGRANT_DISTRO=fedora39 00:01:21.524 SPDK_VAGRANT_VMCPU=10 00:01:21.524 SPDK_VAGRANT_VMRAM=12288 00:01:21.524 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.524 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.524 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.524 SPDK_OPENSTACK_NETWORK=0 00:01:21.524 VAGRANT_PACKAGE_BOX=0 00:01:21.524 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.524 FORCE_DISTRO=true 00:01:21.524 VAGRANT_BOX_VERSION= 00:01:21.524 EXTRA_VAGRANTFILES= 00:01:21.524 NIC_MODEL=e1000 00:01:21.524 00:01:21.524 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:21.524 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.807 Bringing machine 'default' up with 'libvirt' provider... 00:01:25.067 ==> default: Creating image (snapshot of base box volume). 00:01:25.067 ==> default: Creating domain with the following settings... 00:01:25.067 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732010504_7c7888d6a7eed8cfe2f9 00:01:25.067 ==> default: -- Domain type: kvm 00:01:25.067 ==> default: -- Cpus: 10 00:01:25.067 ==> default: -- Feature: acpi 00:01:25.067 ==> default: -- Feature: apic 00:01:25.067 ==> default: -- Feature: pae 00:01:25.067 ==> default: -- Memory: 12288M 00:01:25.067 ==> default: -- Memory Backing: hugepages: 00:01:25.067 ==> default: -- Management MAC: 00:01:25.067 ==> default: -- Loader: 00:01:25.067 ==> default: -- Nvram: 00:01:25.067 ==> default: -- Base box: spdk/fedora39 00:01:25.067 ==> default: -- Storage pool: default 00:01:25.067 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732010504_7c7888d6a7eed8cfe2f9.img (20G) 00:01:25.067 ==> default: -- Volume Cache: default 00:01:25.067 ==> default: -- Kernel: 00:01:25.067 ==> default: -- Initrd: 00:01:25.067 ==> default: -- Graphics Type: vnc 00:01:25.067 ==> default: -- Graphics Port: -1 00:01:25.067 ==> default: -- Graphics IP: 127.0.0.1 00:01:25.067 ==> default: -- Graphics Password: Not defined 00:01:25.067 ==> default: -- Video Type: cirrus 00:01:25.067 ==> default: -- Video VRAM: 9216 00:01:25.067 ==> default: -- Sound Type: 00:01:25.067 ==> default: -- Keymap: en-us 00:01:25.067 ==> default: -- TPM Path: 00:01:25.067 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:25.067 ==> default: -- Command line args: 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:25.067 ==> default: -> value=-drive, 00:01:25.067 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:25.067 ==> default: -> value=-drive, 00:01:25.067 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.067 ==> default: -> value=-drive, 00:01:25.067 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.067 ==> default: -> value=-drive, 00:01:25.067 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:25.067 ==> default: -> value=-device, 00:01:25.067 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.326 ==> default: Creating shared folders metadata... 00:01:25.326 ==> default: Starting domain. 00:01:26.705 ==> default: Waiting for domain to get an IP address... 00:01:41.581 ==> default: Waiting for SSH to become available... 00:01:42.953 ==> default: Configuring and enabling network interfaces... 00:01:47.139 default: SSH address: 192.168.121.2:22 00:01:47.139 default: SSH username: vagrant 00:01:47.139 default: SSH auth method: private key 00:01:49.698 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:56.260 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:02.820 ==> default: Mounting SSHFS shared folder... 00:02:03.388 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:03.388 ==> default: Checking Mount.. 00:02:04.809 ==> default: Folder Successfully Mounted! 00:02:04.809 ==> default: Running provisioner: file... 00:02:05.394 default: ~/.gitconfig => .gitconfig 00:02:05.962 00:02:05.962 SUCCESS! 00:02:05.962 00:02:05.962 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:05.962 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:05.962 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:05.962 00:02:05.971 [Pipeline] } 00:02:05.986 [Pipeline] // stage 00:02:05.994 [Pipeline] dir 00:02:05.995 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:05.997 [Pipeline] { 00:02:06.009 [Pipeline] catchError 00:02:06.011 [Pipeline] { 00:02:06.024 [Pipeline] sh 00:02:06.302 + vagrant ssh-config --host vagrant 00:02:06.302 + sed -ne /^Host/,$p 00:02:06.302 + tee ssh_conf 00:02:10.491 Host vagrant 00:02:10.491 HostName 192.168.121.2 00:02:10.491 User vagrant 00:02:10.491 Port 22 00:02:10.491 UserKnownHostsFile /dev/null 00:02:10.491 StrictHostKeyChecking no 00:02:10.491 PasswordAuthentication no 00:02:10.491 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:10.491 IdentitiesOnly yes 00:02:10.491 LogLevel FATAL 00:02:10.491 ForwardAgent yes 00:02:10.491 ForwardX11 yes 00:02:10.491 00:02:10.504 [Pipeline] withEnv 00:02:10.507 [Pipeline] { 00:02:10.521 [Pipeline] sh 00:02:10.799 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:10.799 source /etc/os-release 00:02:10.799 [[ -e /image.version ]] && img=$(< /image.version) 00:02:10.799 # Minimal, systemd-like check. 
00:02:10.799 if [[ -e /.dockerenv ]]; then 00:02:10.799 # Clear garbage from the node's name: 00:02:10.799 # agt-er_autotest_547-896 -> autotest_547-896 00:02:10.799 # $HOSTNAME is the actual container id 00:02:10.799 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:10.799 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:10.799 # We can assume this is a mount from a host where container is running, 00:02:10.799 # so fetch its hostname to easily identify the target swarm worker. 00:02:10.799 container="$(< /etc/hostname) ($agent)" 00:02:10.799 else 00:02:10.799 # Fallback 00:02:10.799 container=$agent 00:02:10.799 fi 00:02:10.799 fi 00:02:10.799 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:10.799 00:02:10.809 [Pipeline] } 00:02:10.825 [Pipeline] // withEnv 00:02:10.836 [Pipeline] setCustomBuildProperty 00:02:10.851 [Pipeline] stage 00:02:10.854 [Pipeline] { (Tests) 00:02:10.872 [Pipeline] sh 00:02:11.151 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:11.424 [Pipeline] sh 00:02:11.801 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:11.817 [Pipeline] timeout 00:02:11.817 Timeout set to expire in 1 hr 0 min 00:02:11.820 [Pipeline] { 00:02:11.834 [Pipeline] sh 00:02:12.112 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:12.680 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:12.691 [Pipeline] sh 00:02:12.970 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:13.242 [Pipeline] sh 00:02:13.521 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:13.535 [Pipeline] sh 00:02:13.813 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:13.813 ++ readlink -f spdk_repo 00:02:13.813 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:13.813 + [[ -n /home/vagrant/spdk_repo ]] 00:02:13.813 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:13.813 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:13.813 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:13.813 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:14.072 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.072 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:14.072 + cd /home/vagrant/spdk_repo 00:02:14.072 + source /etc/os-release 00:02:14.072 ++ NAME='Fedora Linux' 00:02:14.072 ++ VERSION='39 (Cloud Edition)' 00:02:14.072 ++ ID=fedora 00:02:14.072 ++ VERSION_ID=39 00:02:14.072 ++ VERSION_CODENAME= 00:02:14.072 ++ PLATFORM_ID=platform:f39 00:02:14.072 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:14.072 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:14.072 ++ LOGO=fedora-logo-icon 00:02:14.072 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:14.072 ++ HOME_URL=https://fedoraproject.org/ 00:02:14.072 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:14.072 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:14.072 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:14.072 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:14.072 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:14.072 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:14.072 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:14.072 ++ SUPPORT_END=2024-11-12 00:02:14.072 ++ VARIANT='Cloud Edition' 00:02:14.072 ++ VARIANT_ID=cloud 00:02:14.072 + uname -a 00:02:14.072 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:14.072 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:14.072 Hugepages 00:02:14.072 node hugesize free / total 00:02:14.072 node0 1048576kB 0 / 0 00:02:14.072 node0 2048kB 0 / 0 00:02:14.072 00:02:14.072 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:14.072 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:14.072 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:14.072 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:14.072 + rm -f /tmp/spdk-ld-path 00:02:14.072 + source autorun-spdk.conf 00:02:14.072 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.072 ++ SPDK_TEST_NVMF=1 00:02:14.072 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.072 ++ SPDK_TEST_USDT=1 00:02:14.072 ++ SPDK_RUN_UBSAN=1 00:02:14.072 ++ SPDK_TEST_NVMF_MDNS=1 00:02:14.072 ++ NET_TYPE=virt 00:02:14.072 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:14.072 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:14.072 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:14.072 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.072 ++ RUN_NIGHTLY=1 00:02:14.072 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:14.072 + [[ -n '' ]] 00:02:14.072 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:14.072 + for M in /var/spdk/build-*-manifest.txt 00:02:14.072 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:14.072 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.072 + for M in /var/spdk/build-*-manifest.txt 00:02:14.072 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:14.072 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.072 + for M in /var/spdk/build-*-manifest.txt 00:02:14.072 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:14.072 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:14.072 ++ uname 00:02:14.072 + [[ Linux == \L\i\n\u\x ]] 00:02:14.072 + sudo dmesg -T 00:02:14.331 + sudo dmesg --clear 00:02:14.331 + dmesg_pid=5959 00:02:14.331 + sudo dmesg -Tw 00:02:14.331 + [[ Fedora Linux == FreeBSD ]] 00:02:14.331 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:14.331 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:14.331 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:14.331 + [[ -x /usr/src/fio-static/fio ]] 00:02:14.331 + export FIO_BIN=/usr/src/fio-static/fio 00:02:14.331 + FIO_BIN=/usr/src/fio-static/fio 00:02:14.331 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:14.331 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:14.331 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:14.331 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.331 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:14.331 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:14.331 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.331 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:14.331 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:14.331 Test configuration: 00:02:14.331 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.331 SPDK_TEST_NVMF=1 00:02:14.331 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.331 SPDK_TEST_USDT=1 00:02:14.331 SPDK_RUN_UBSAN=1 00:02:14.331 SPDK_TEST_NVMF_MDNS=1 00:02:14.331 NET_TYPE=virt 00:02:14.331 SPDK_JSONRPC_GO_CLIENT=1 00:02:14.331 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:14.331 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:14.331 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.331 RUN_NIGHTLY=1 10:02:33 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:14.331 10:02:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:14.331 10:02:33 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:14.332 10:02:33 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:14.332 10:02:33 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:14.332 10:02:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.332 10:02:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.332 10:02:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.332 10:02:33 -- paths/export.sh@5 -- $ export PATH 00:02:14.332 10:02:33 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:14.332 10:02:33 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:14.332 10:02:33 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:14.332 10:02:33 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732010553.XXXXXX 00:02:14.332 10:02:33 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732010553.jXJAof 00:02:14.332 10:02:33 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:14.332 10:02:33 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:14.332 10:02:33 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:14.332 10:02:33 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:14.332 10:02:33 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:14.332 10:02:33 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:14.332 10:02:33 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:14.332 10:02:33 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:14.332 10:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.332 10:02:33 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:02:14.332 10:02:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:14.332 10:02:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:14.332 10:02:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:14.332 10:02:33 -- spdk/autobuild.sh@16 -- $ date -u 00:02:14.332 Tue Nov 19 10:02:33 AM UTC 2024 00:02:14.332 10:02:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:14.332 LTS-67-gc13c99a5e 00:02:14.332 10:02:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:14.332 10:02:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:14.332 10:02:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:14.332 10:02:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:14.332 10:02:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:14.332 10:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.332 ************************************ 00:02:14.332 START TEST ubsan 00:02:14.332 ************************************ 00:02:14.332 using ubsan 00:02:14.332 10:02:33 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:14.332 00:02:14.332 real 0m0.000s 00:02:14.332 user 0m0.000s 00:02:14.332 sys 0m0.000s 00:02:14.332 10:02:33 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:14.332 ************************************ 00:02:14.332 END TEST ubsan 00:02:14.332 ************************************ 00:02:14.332 10:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.332 
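
[Editor's note] The "START TEST ubsan" / "END TEST ubsan" banners and the real/user/sys timing above come from SPDK's run_test wrapper. Below is a simplified, illustrative reimplementation of that wrapper, not SPDK's actual helper (the real one lives in autotest_common.sh and does more bookkeeping); it is only meant to show how the banner-and-timing output in the log is produced.

#!/usr/bin/env bash
# Minimal run_test-style wrapper: print a start banner, time the wrapped
# command, print an end banner, and propagate its exit status.

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # run the wrapped command; prints real/user/sys
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage mirroring the log: the "ubsan" test simply records that UBSan is enabled.
run_test ubsan echo 'using ubsan'

[End of editor's note]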
10:02:33 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:14.332 10:02:33 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:14.332 10:02:33 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:14.332 10:02:33 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:14.332 10:02:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:14.332 10:02:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.332 ************************************ 00:02:14.332 START TEST build_native_dpdk 00:02:14.332 ************************************ 00:02:14.332 10:02:33 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:14.332 10:02:33 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:14.332 10:02:33 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:14.332 10:02:33 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:14.332 10:02:33 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:14.332 10:02:33 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:14.332 10:02:33 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:14.332 10:02:33 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:14.332 10:02:33 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:14.332 10:02:33 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:14.332 10:02:33 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:14.332 10:02:33 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:14.332 10:02:33 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:14.591 10:02:33 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:14.591 10:02:33 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:14.591 10:02:33 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:14.591 10:02:33 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:14.591 10:02:33 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:14.591 10:02:33 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:14.591 10:02:33 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:14.591 caf0f5d395 version: 22.11.4 00:02:14.591 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:14.591 dc9c799c7d vhost: fix missing spinlock unlock 00:02:14.591 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:14.591 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:14.591 10:02:33 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:14.591 10:02:33 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:14.591 10:02:33 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:14.591 10:02:33 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:14.591 10:02:33 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:14.591 10:02:33 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:14.591 10:02:33 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:14.591 10:02:33 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:14.591 10:02:33 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:14.591 10:02:33 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:14.591 10:02:33 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:14.591 10:02:33 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:14.591 10:02:33 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:14.591 10:02:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:14.591 10:02:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:14.591 10:02:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:14.591 10:02:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:14.591 10:02:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:14.591 10:02:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:14.591 10:02:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:14.591 10:02:33 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:14.591 10:02:33 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:14.591 10:02:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:14.591 10:02:33 -- scripts/common.sh@343 -- $ case "$op" in 00:02:14.591 10:02:33 -- scripts/common.sh@344 -- $ : 1 00:02:14.591 10:02:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:14.591 10:02:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:14.591 10:02:33 -- scripts/common.sh@364 -- $ decimal 22 00:02:14.591 10:02:33 -- scripts/common.sh@352 -- $ local d=22 00:02:14.591 10:02:33 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:14.591 10:02:33 -- scripts/common.sh@354 -- $ echo 22 00:02:14.591 10:02:33 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:14.591 10:02:33 -- scripts/common.sh@365 -- $ decimal 21 00:02:14.591 10:02:33 -- scripts/common.sh@352 -- $ local d=21 00:02:14.591 10:02:33 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:14.591 10:02:33 -- scripts/common.sh@354 -- $ echo 21 00:02:14.591 10:02:33 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:14.591 10:02:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:14.591 10:02:33 -- scripts/common.sh@366 -- $ return 1 00:02:14.591 10:02:33 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:14.591 patching file config/rte_config.h 00:02:14.591 Hunk #1 succeeded at 60 (offset 1 line). 00:02:14.591 10:02:33 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:14.591 10:02:33 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:14.591 10:02:33 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:14.591 10:02:33 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:14.591 10:02:33 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:14.591 10:02:33 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:14.591 10:02:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:14.591 10:02:33 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:14.591 10:02:33 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:14.591 10:02:33 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:14.591 10:02:33 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:14.591 10:02:33 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:14.591 10:02:33 -- scripts/common.sh@343 -- $ case "$op" in 00:02:14.591 10:02:33 -- scripts/common.sh@344 -- $ : 1 00:02:14.591 10:02:33 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:14.591 10:02:33 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:14.591 10:02:33 -- scripts/common.sh@364 -- $ decimal 22 00:02:14.591 10:02:33 -- scripts/common.sh@352 -- $ local d=22 00:02:14.591 10:02:33 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:14.591 10:02:33 -- scripts/common.sh@354 -- $ echo 22 00:02:14.591 10:02:33 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:14.591 10:02:33 -- scripts/common.sh@365 -- $ decimal 24 00:02:14.591 10:02:33 -- scripts/common.sh@352 -- $ local d=24 00:02:14.591 10:02:33 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:14.591 10:02:33 -- scripts/common.sh@354 -- $ echo 24 00:02:14.591 10:02:33 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:14.591 10:02:33 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:14.591 10:02:33 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:14.591 10:02:33 -- scripts/common.sh@367 -- $ return 0 00:02:14.591 10:02:33 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:14.591 patching file lib/pcapng/rte_pcapng.c 00:02:14.591 Hunk #1 succeeded at 110 (offset -18 lines). 
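
[Editor's note] The traces above show scripts/common.sh comparing the DPDK version field by field (cmp_versions/decimal) before deciding which compatibility patches to apply. The sketch below is a simplified version-comparison function written for illustration; it is not the exact SPDK helper, but its results match the decisions in the log (22.11.4 is not older than 21.11.0, and is older than 24.07.0).

#!/usr/bin/env bash
# Simplified dotted-version comparison: split on '.', compare numerically
# field by field, treating missing fields as 0.

version_lt() {                     # returns 0 (true) if $1 < $2
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                       # versions are equal, so not strictly less-than
}

version_lt 22.11.4 21.11.0 && echo "22.11.4 < 21.11.0" || echo "22.11.4 >= 21.11.0"
version_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0" || echo "22.11.4 >= 24.07.0"

[End of editor's note]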
00:02:14.591 10:02:33 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:14.591 10:02:33 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:14.591 10:02:33 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:14.591 10:02:33 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:14.591 10:02:33 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:19.867 The Meson build system 00:02:19.867 Version: 1.5.0 00:02:19.867 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:19.867 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:19.867 Build type: native build 00:02:19.867 Program cat found: YES (/usr/bin/cat) 00:02:19.867 Project name: DPDK 00:02:19.867 Project version: 22.11.4 00:02:19.867 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:19.867 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:19.867 Host machine cpu family: x86_64 00:02:19.867 Host machine cpu: x86_64 00:02:19.867 Message: ## Building in Developer Mode ## 00:02:19.867 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.867 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:19.867 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.867 Program objdump found: YES (/usr/bin/objdump) 00:02:19.867 Program python3 found: YES (/usr/bin/python3) 00:02:19.867 Program cat found: YES (/usr/bin/cat) 00:02:19.867 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:19.867 Checking for size of "void *" : 8 00:02:19.867 Checking for size of "void *" : 8 (cached) 00:02:19.867 Library m found: YES 00:02:19.867 Library numa found: YES 00:02:19.867 Has header "numaif.h" : YES 00:02:19.867 Library fdt found: NO 00:02:19.867 Library execinfo found: NO 00:02:19.867 Has header "execinfo.h" : YES 00:02:19.867 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:19.867 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.867 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.867 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.867 Run-time dependency openssl found: YES 3.1.1 00:02:19.867 Run-time dependency libpcap found: YES 1.10.4 00:02:19.867 Has header "pcap.h" with dependency libpcap: YES 00:02:19.867 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.867 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.867 Compiler for C supports arguments -Wformat: YES 00:02:19.867 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.867 Compiler for C supports arguments -Wformat-security: NO 00:02:19.867 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.867 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.867 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.867 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.867 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.867 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.867 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.867 Compiler for C supports arguments -Wundef: YES 00:02:19.867 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.867 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.867 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:19.867 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.867 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.867 Compiler for C supports arguments -mavx512f: YES 00:02:19.867 Checking if "AVX512 checking" compiles: YES 00:02:19.867 Fetching value of define "__SSE4_2__" : 1 00:02:19.867 Fetching value of define "__AES__" : 1 00:02:19.867 Fetching value of define "__AVX__" : 1 00:02:19.867 Fetching value of define "__AVX2__" : 1 00:02:19.867 Fetching value of define "__AVX512BW__" : (undefined) 00:02:19.867 Fetching value of define "__AVX512CD__" : (undefined) 00:02:19.867 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:19.867 Fetching value of define "__AVX512F__" : (undefined) 00:02:19.867 Fetching value of define "__AVX512VL__" : (undefined) 00:02:19.867 Fetching value of define "__PCLMUL__" : 1 00:02:19.867 Fetching value of define "__RDRND__" : 1 00:02:19.867 Fetching value of define "__RDSEED__" : 1 00:02:19.867 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.867 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.867 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.867 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.867 Checking for function "getentropy" : YES 00:02:19.867 Message: lib/eal: Defining dependency "eal" 00:02:19.867 Message: lib/ring: Defining dependency "ring" 00:02:19.867 Message: lib/rcu: Defining dependency "rcu" 00:02:19.867 Message: lib/mempool: Defining dependency "mempool" 00:02:19.867 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.867 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:19.867 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.867 Compiler for C supports arguments -mpclmul: YES 00:02:19.867 Compiler for C supports arguments -maes: YES 00:02:19.867 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.867 Compiler for C supports arguments -mavx512bw: YES 00:02:19.867 Compiler for C supports arguments -mavx512dq: YES 00:02:19.867 Compiler for C supports arguments -mavx512vl: YES 00:02:19.867 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.867 Compiler for C supports arguments -mavx2: YES 00:02:19.867 Compiler for C supports arguments -mavx: YES 00:02:19.867 Message: lib/net: Defining dependency "net" 00:02:19.867 Message: lib/meter: Defining dependency "meter" 00:02:19.867 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.867 Message: lib/pci: Defining dependency "pci" 00:02:19.867 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.867 Message: lib/metrics: Defining dependency "metrics" 00:02:19.867 Message: lib/hash: Defining dependency "hash" 00:02:19.867 Message: lib/timer: Defining dependency "timer" 00:02:19.867 Fetching value of define "__AVX2__" : 1 (cached) 00:02:19.867 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:19.868 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:19.868 Message: lib/acl: Defining dependency "acl" 00:02:19.868 Message: lib/bbdev: Defining dependency "bbdev" 00:02:19.868 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:19.868 Run-time dependency libelf found: YES 0.191 00:02:19.868 Message: lib/bpf: Defining dependency "bpf" 00:02:19.868 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:19.868 Message: lib/compressdev: Defining dependency "compressdev" 00:02:19.868 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.868 Message: lib/distributor: Defining dependency "distributor" 00:02:19.868 Message: lib/efd: Defining dependency "efd" 00:02:19.868 Message: lib/eventdev: Defining dependency "eventdev" 00:02:19.868 Message: lib/gpudev: Defining dependency "gpudev" 00:02:19.868 Message: lib/gro: Defining dependency "gro" 00:02:19.868 Message: lib/gso: Defining dependency "gso" 00:02:19.868 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:19.868 Message: lib/jobstats: Defining dependency "jobstats" 00:02:19.868 Message: lib/latencystats: Defining dependency "latencystats" 00:02:19.868 Message: lib/lpm: Defining dependency "lpm" 00:02:19.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:19.868 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:19.868 Message: lib/member: Defining dependency "member" 00:02:19.868 Message: lib/pcapng: Defining dependency "pcapng" 00:02:19.868 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:19.868 Message: lib/power: Defining dependency "power" 00:02:19.868 Message: lib/rawdev: Defining dependency "rawdev" 00:02:19.868 Message: lib/regexdev: Defining dependency "regexdev" 00:02:19.868 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.868 Message: lib/rib: Defining 
dependency "rib" 00:02:19.868 Message: lib/reorder: Defining dependency "reorder" 00:02:19.868 Message: lib/sched: Defining dependency "sched" 00:02:19.868 Message: lib/security: Defining dependency "security" 00:02:19.868 Message: lib/stack: Defining dependency "stack" 00:02:19.868 Has header "linux/userfaultfd.h" : YES 00:02:19.868 Message: lib/vhost: Defining dependency "vhost" 00:02:19.868 Message: lib/ipsec: Defining dependency "ipsec" 00:02:19.868 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:19.868 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:19.868 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:19.868 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:19.868 Message: lib/fib: Defining dependency "fib" 00:02:19.868 Message: lib/port: Defining dependency "port" 00:02:19.868 Message: lib/pdump: Defining dependency "pdump" 00:02:19.868 Message: lib/table: Defining dependency "table" 00:02:19.868 Message: lib/pipeline: Defining dependency "pipeline" 00:02:19.868 Message: lib/graph: Defining dependency "graph" 00:02:19.868 Message: lib/node: Defining dependency "node" 00:02:19.868 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.868 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.868 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.868 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.868 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:19.868 Compiler for C supports arguments -Wno-unused-value: YES 00:02:19.868 Compiler for C supports arguments -Wno-format: YES 00:02:19.868 Compiler for C supports arguments -Wno-format-security: YES 00:02:19.868 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:21.771 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:21.771 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:21.771 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:21.771 Fetching value of define "__AVX2__" : 1 (cached) 00:02:21.771 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:21.771 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.771 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:21.771 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:21.771 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:21.771 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.771 Configuring doxy-api.conf using configuration 00:02:21.771 Program sphinx-build found: NO 00:02:21.771 Configuring rte_build_config.h using configuration 00:02:21.771 Message: 00:02:21.771 ================= 00:02:21.771 Applications Enabled 00:02:21.771 ================= 00:02:21.771 00:02:21.771 apps: 00:02:21.771 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:21.771 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:21.771 test-security-perf, 00:02:21.771 00:02:21.771 Message: 00:02:21.771 ================= 00:02:21.771 Libraries Enabled 00:02:21.771 ================= 00:02:21.771 00:02:21.771 libs: 00:02:21.771 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:21.771 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:21.771 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:21.771 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:21.771 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:21.771 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:21.771 table, pipeline, graph, node, 00:02:21.771 00:02:21.771 Message: 00:02:21.771 =============== 00:02:21.772 Drivers Enabled 00:02:21.772 =============== 00:02:21.772 00:02:21.772 common: 00:02:21.772 00:02:21.772 bus: 00:02:21.772 pci, vdev, 00:02:21.772 mempool: 00:02:21.772 ring, 00:02:21.772 dma: 00:02:21.772 00:02:21.772 net: 00:02:21.772 i40e, 00:02:21.772 raw: 00:02:21.772 00:02:21.772 crypto: 00:02:21.772 00:02:21.772 compress: 00:02:21.772 00:02:21.772 regex: 00:02:21.772 00:02:21.772 vdpa: 00:02:21.772 00:02:21.772 event: 00:02:21.772 00:02:21.772 baseband: 00:02:21.772 00:02:21.772 gpu: 00:02:21.772 00:02:21.772 00:02:21.772 Message: 00:02:21.772 ================= 00:02:21.772 Content Skipped 00:02:21.772 ================= 00:02:21.772 00:02:21.772 apps: 00:02:21.772 00:02:21.772 libs: 00:02:21.772 kni: explicitly disabled via build config (deprecated lib) 00:02:21.772 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:21.772 00:02:21.772 drivers: 00:02:21.772 common/cpt: not in enabled drivers build config 00:02:21.772 common/dpaax: not in enabled drivers build config 00:02:21.772 common/iavf: not in enabled drivers build config 00:02:21.772 common/idpf: not in enabled drivers build config 00:02:21.772 common/mvep: not in enabled drivers build config 00:02:21.772 common/octeontx: not in enabled drivers build config 00:02:21.772 bus/auxiliary: not in enabled drivers build config 00:02:21.772 bus/dpaa: not in enabled drivers build config 00:02:21.772 bus/fslmc: not in enabled drivers build config 00:02:21.772 bus/ifpga: not in enabled drivers build config 00:02:21.772 bus/vmbus: not in enabled drivers build config 00:02:21.772 common/cnxk: not in enabled drivers build config 00:02:21.772 common/mlx5: not in enabled drivers build config 00:02:21.772 common/qat: not in enabled drivers build config 00:02:21.772 common/sfc_efx: not in enabled drivers build config 00:02:21.772 mempool/bucket: not in enabled drivers build config 00:02:21.772 mempool/cnxk: not in enabled drivers build config 00:02:21.772 mempool/dpaa: not in enabled drivers build config 00:02:21.772 mempool/dpaa2: not in enabled drivers build config 00:02:21.772 mempool/octeontx: not in enabled drivers build config 00:02:21.772 mempool/stack: not in enabled drivers build config 00:02:21.772 dma/cnxk: not in enabled drivers build config 00:02:21.772 dma/dpaa: not in enabled drivers build config 00:02:21.772 dma/dpaa2: not in enabled drivers build config 00:02:21.772 dma/hisilicon: not in enabled drivers build config 00:02:21.772 dma/idxd: not in enabled drivers build config 00:02:21.772 dma/ioat: not in enabled drivers build config 00:02:21.772 dma/skeleton: not in enabled drivers build config 00:02:21.772 net/af_packet: not in enabled drivers build config 00:02:21.772 net/af_xdp: not in enabled drivers build config 00:02:21.772 net/ark: not in enabled drivers build config 00:02:21.772 net/atlantic: not in enabled drivers build config 00:02:21.772 net/avp: not in enabled drivers build config 00:02:21.772 net/axgbe: not in enabled drivers build config 00:02:21.772 net/bnx2x: not in enabled drivers build config 00:02:21.772 net/bnxt: not in enabled drivers build config 00:02:21.772 net/bonding: not in enabled drivers build config 00:02:21.772 net/cnxk: not in enabled drivers build config 00:02:21.772 net/cxgbe: not in 
enabled drivers build config 00:02:21.772 net/dpaa: not in enabled drivers build config 00:02:21.772 net/dpaa2: not in enabled drivers build config 00:02:21.772 net/e1000: not in enabled drivers build config 00:02:21.772 net/ena: not in enabled drivers build config 00:02:21.772 net/enetc: not in enabled drivers build config 00:02:21.772 net/enetfec: not in enabled drivers build config 00:02:21.772 net/enic: not in enabled drivers build config 00:02:21.772 net/failsafe: not in enabled drivers build config 00:02:21.772 net/fm10k: not in enabled drivers build config 00:02:21.772 net/gve: not in enabled drivers build config 00:02:21.772 net/hinic: not in enabled drivers build config 00:02:21.772 net/hns3: not in enabled drivers build config 00:02:21.772 net/iavf: not in enabled drivers build config 00:02:21.772 net/ice: not in enabled drivers build config 00:02:21.772 net/idpf: not in enabled drivers build config 00:02:21.772 net/igc: not in enabled drivers build config 00:02:21.772 net/ionic: not in enabled drivers build config 00:02:21.772 net/ipn3ke: not in enabled drivers build config 00:02:21.772 net/ixgbe: not in enabled drivers build config 00:02:21.772 net/kni: not in enabled drivers build config 00:02:21.772 net/liquidio: not in enabled drivers build config 00:02:21.772 net/mana: not in enabled drivers build config 00:02:21.772 net/memif: not in enabled drivers build config 00:02:21.772 net/mlx4: not in enabled drivers build config 00:02:21.772 net/mlx5: not in enabled drivers build config 00:02:21.772 net/mvneta: not in enabled drivers build config 00:02:21.772 net/mvpp2: not in enabled drivers build config 00:02:21.772 net/netvsc: not in enabled drivers build config 00:02:21.772 net/nfb: not in enabled drivers build config 00:02:21.772 net/nfp: not in enabled drivers build config 00:02:21.772 net/ngbe: not in enabled drivers build config 00:02:21.772 net/null: not in enabled drivers build config 00:02:21.772 net/octeontx: not in enabled drivers build config 00:02:21.772 net/octeon_ep: not in enabled drivers build config 00:02:21.772 net/pcap: not in enabled drivers build config 00:02:21.772 net/pfe: not in enabled drivers build config 00:02:21.772 net/qede: not in enabled drivers build config 00:02:21.772 net/ring: not in enabled drivers build config 00:02:21.772 net/sfc: not in enabled drivers build config 00:02:21.772 net/softnic: not in enabled drivers build config 00:02:21.772 net/tap: not in enabled drivers build config 00:02:21.772 net/thunderx: not in enabled drivers build config 00:02:21.772 net/txgbe: not in enabled drivers build config 00:02:21.772 net/vdev_netvsc: not in enabled drivers build config 00:02:21.772 net/vhost: not in enabled drivers build config 00:02:21.772 net/virtio: not in enabled drivers build config 00:02:21.772 net/vmxnet3: not in enabled drivers build config 00:02:21.772 raw/cnxk_bphy: not in enabled drivers build config 00:02:21.772 raw/cnxk_gpio: not in enabled drivers build config 00:02:21.772 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:21.772 raw/ifpga: not in enabled drivers build config 00:02:21.772 raw/ntb: not in enabled drivers build config 00:02:21.772 raw/skeleton: not in enabled drivers build config 00:02:21.772 crypto/armv8: not in enabled drivers build config 00:02:21.772 crypto/bcmfs: not in enabled drivers build config 00:02:21.772 crypto/caam_jr: not in enabled drivers build config 00:02:21.772 crypto/ccp: not in enabled drivers build config 00:02:21.772 crypto/cnxk: not in enabled drivers build config 00:02:21.772 
crypto/dpaa_sec: not in enabled drivers build config 00:02:21.772 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.772 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.772 crypto/mlx5: not in enabled drivers build config 00:02:21.772 crypto/mvsam: not in enabled drivers build config 00:02:21.772 crypto/nitrox: not in enabled drivers build config 00:02:21.772 crypto/null: not in enabled drivers build config 00:02:21.772 crypto/octeontx: not in enabled drivers build config 00:02:21.772 crypto/openssl: not in enabled drivers build config 00:02:21.772 crypto/scheduler: not in enabled drivers build config 00:02:21.772 crypto/uadk: not in enabled drivers build config 00:02:21.772 crypto/virtio: not in enabled drivers build config 00:02:21.772 compress/isal: not in enabled drivers build config 00:02:21.772 compress/mlx5: not in enabled drivers build config 00:02:21.772 compress/octeontx: not in enabled drivers build config 00:02:21.772 compress/zlib: not in enabled drivers build config 00:02:21.772 regex/mlx5: not in enabled drivers build config 00:02:21.772 regex/cn9k: not in enabled drivers build config 00:02:21.772 vdpa/ifc: not in enabled drivers build config 00:02:21.772 vdpa/mlx5: not in enabled drivers build config 00:02:21.772 vdpa/sfc: not in enabled drivers build config 00:02:21.772 event/cnxk: not in enabled drivers build config 00:02:21.772 event/dlb2: not in enabled drivers build config 00:02:21.772 event/dpaa: not in enabled drivers build config 00:02:21.772 event/dpaa2: not in enabled drivers build config 00:02:21.772 event/dsw: not in enabled drivers build config 00:02:21.772 event/opdl: not in enabled drivers build config 00:02:21.772 event/skeleton: not in enabled drivers build config 00:02:21.772 event/sw: not in enabled drivers build config 00:02:21.772 event/octeontx: not in enabled drivers build config 00:02:21.772 baseband/acc: not in enabled drivers build config 00:02:21.772 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:21.772 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:21.772 baseband/la12xx: not in enabled drivers build config 00:02:21.772 baseband/null: not in enabled drivers build config 00:02:21.772 baseband/turbo_sw: not in enabled drivers build config 00:02:21.772 gpu/cuda: not in enabled drivers build config 00:02:21.772 00:02:21.772 00:02:21.772 Build targets in project: 314 00:02:21.772 00:02:21.772 DPDK 22.11.4 00:02:21.772 00:02:21.772 User defined options 00:02:21.772 libdir : lib 00:02:21.772 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:21.772 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:21.772 c_link_args : 00:02:21.772 enable_docs : false 00:02:21.772 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:21.772 enable_kmods : false 00:02:21.772 machine : native 00:02:21.772 tests : false 00:02:21.772 00:02:21.772 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.772 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
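Note: the exact configure command used by the harness is not echoed in this part of the log, but the "User defined options" summary above corresponds roughly to a meson invocation of the following shape. This is a sketch only: the prefix, libdir, c_args, enable_docs, enable_drivers, enable_kmods, machine and tests values are copied from the summary above, and the build directory name is taken from the ninja line that follows; the real wrapper script may pass them differently. Using the explicit `meson setup` form also avoids the deprecation warning printed above.

  # Hypothetical reconstruction of the DPDK configure step implied by the
  # options summary above (not the literal command the CI wrapper ran).
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  # Build step, matching the ninja invocation recorded below.
  ninja -C build-tmp -j10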
00:02:22.030 10:02:41 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:22.030 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:22.030 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:22.030 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:22.030 [3/743] Generating lib/rte_telemetry_def with a custom command 00:02:22.030 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:22.030 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.030 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.030 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.288 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.288 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:22.288 [10/743] Linking static target lib/librte_kvargs.a 00:02:22.288 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:22.288 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.288 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.288 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.288 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.288 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.288 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.288 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.288 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.547 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.547 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:22.547 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.547 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.547 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:22.547 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.547 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.547 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.547 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.547 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.806 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.806 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.806 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.806 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.806 [34/743] Linking static target lib/librte_telemetry.a 00:02:22.806 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.806 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.806 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.806 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:22.806 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.806 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.806 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.065 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.065 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.065 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.065 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.065 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:23.065 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.323 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.323 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.323 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.323 [51/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:23.323 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.323 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.323 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.323 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.323 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.323 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.323 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.323 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.323 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.323 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.323 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.323 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.582 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.582 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:23.582 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.582 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.582 [68/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.582 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.582 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:23.582 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.582 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.582 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:23.582 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.582 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.582 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.841 [77/743] Generating lib/rte_eal_def with a custom command 00:02:23.841 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:23.841 [79/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:23.841 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.841 [81/743] Generating lib/rte_ring_def with a custom command 00:02:23.841 [82/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.841 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:23.841 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:23.841 [85/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.841 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:02:23.841 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.841 [88/743] Linking static target lib/librte_ring.a 00:02:23.841 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:23.841 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:24.100 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:24.100 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.100 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.100 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.358 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.358 [96/743] Linking static target lib/librte_eal.a 00:02:24.616 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.616 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.616 [99/743] Generating lib/rte_mbuf_def with a custom command 00:02:24.616 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:24.616 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.616 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.616 [103/743] Linking static target lib/librte_rcu.a 00:02:24.616 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.875 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.875 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.875 [107/743] Linking static target lib/librte_mempool.a 00:02:24.875 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.875 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:25.133 [110/743] Generating lib/rte_net_def with a custom command 00:02:25.133 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:25.133 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:25.133 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:25.133 [114/743] Generating lib/rte_meter_def with a custom command 00:02:25.133 [115/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.133 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.133 [117/743] Generating lib/rte_meter_mingw with a custom command 00:02:25.392 [118/743] Linking static target lib/librte_meter.a 00:02:25.392 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.392 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.392 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.649 [122/743] Generating lib/meter.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:25.649 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.649 [124/743] Linking static target lib/librte_net.a 00:02:25.649 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.649 [126/743] Linking static target lib/librte_mbuf.a 00:02:25.649 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.918 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.179 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:26.179 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:26.179 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.179 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.179 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.179 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.438 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.004 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.004 [137/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.004 [138/743] Generating lib/rte_ethdev_def with a custom command 00:02:27.004 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:27.004 [140/743] Generating lib/rte_pci_def with a custom command 00:02:27.004 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.004 [142/743] Generating lib/rte_pci_mingw with a custom command 00:02:27.004 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.004 [144/743] Linking static target lib/librte_pci.a 00:02:27.004 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.004 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.004 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.004 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.262 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.262 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.262 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:27.263 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:27.263 [153/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.263 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.263 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:27.263 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:27.263 [157/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:27.263 [158/743] Generating lib/rte_cmdline_def with a custom command 00:02:27.263 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:27.263 [160/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:27.263 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:27.521 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:27.521 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.521 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:27.521 [165/743] Generating lib/rte_hash_def with a custom command 00:02:27.521 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.521 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:27.521 [168/743] Generating lib/rte_timer_def with a custom command 00:02:27.521 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:27.779 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.779 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.779 [172/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.779 [173/743] Linking static target lib/librte_cmdline.a 00:02:28.037 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:28.037 [175/743] Linking static target lib/librte_metrics.a 00:02:28.037 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.037 [177/743] Linking static target lib/librte_timer.a 00:02:28.604 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.604 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.604 [180/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:28.604 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.870 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.870 [183/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:28.870 [184/743] Linking static target lib/librte_ethdev.a 00:02:29.171 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:29.171 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:29.171 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:29.171 [188/743] Generating lib/rte_acl_def with a custom command 00:02:29.171 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:29.429 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:29.429 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:29.429 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:29.687 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:29.687 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:29.944 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:29.944 [196/743] Linking static target lib/librte_bitratestats.a 00:02:30.203 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:30.203 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.203 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:30.203 [200/743] Linking static target lib/librte_bbdev.a 00:02:30.466 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:30.466 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.466 [203/743] Linking static target lib/librte_hash.a 00:02:30.726 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:30.726 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:30.726 [206/743] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:30.985 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:30.985 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.985 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:31.243 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.243 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:31.243 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:31.243 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:31.243 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:31.502 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:31.502 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:31.502 [217/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:31.502 [218/743] Linking static target lib/librte_cfgfile.a 00:02:31.502 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:31.502 [220/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:31.761 [221/743] Linking static target lib/librte_acl.a 00:02:31.761 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:32.020 [223/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.020 [224/743] Generating lib/rte_compressdev_def with a custom command 00:02:32.020 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.020 [226/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:32.020 [227/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.020 [228/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:32.020 [229/743] Linking target lib/librte_eal.so.23.0 00:02:32.020 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:02:32.020 [231/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:32.020 [232/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:32.278 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:32.278 [234/743] Linking target lib/librte_ring.so.23.0 00:02:32.278 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:32.278 [236/743] Linking target lib/librte_meter.so.23.0 00:02:32.278 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:32.278 [238/743] Linking target lib/librte_pci.so.23.0 00:02:32.278 [239/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:32.278 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:32.537 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:32.537 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:32.537 [243/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:32.537 [244/743] Linking target lib/librte_mempool.so.23.0 00:02:32.537 [245/743] Linking target lib/librte_timer.so.23.0 00:02:32.537 [246/743] Linking target lib/librte_acl.so.23.0 00:02:32.537 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:32.537 [248/743] Linking static target lib/librte_bpf.a 00:02:32.537 [249/743] Generating symbol file 
lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:32.537 [250/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:32.796 [251/743] Linking target lib/librte_mbuf.so.23.0 00:02:32.796 [252/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:32.796 [253/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:32.796 [254/743] Linking static target lib/librte_compressdev.a 00:02:32.796 [255/743] Linking target lib/librte_cfgfile.so.23.0 00:02:32.796 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:32.796 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:32.796 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:32.796 [259/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:32.796 [260/743] Generating lib/rte_efd_def with a custom command 00:02:32.796 [261/743] Linking target lib/librte_net.so.23.0 00:02:32.796 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:32.796 [263/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.796 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:32.796 [265/743] Generating lib/rte_efd_mingw with a custom command 00:02:33.054 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:33.054 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:33.055 [268/743] Linking target lib/librte_hash.so.23.0 00:02:33.313 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:33.313 [270/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:33.313 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:33.313 [272/743] Linking static target lib/librte_distributor.a 00:02:33.572 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.572 [274/743] Linking target lib/librte_ethdev.so.23.0 00:02:33.572 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:33.572 [276/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.572 [277/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.572 [278/743] Linking target lib/librte_compressdev.so.23.0 00:02:33.572 [279/743] Linking target lib/librte_distributor.so.23.0 00:02:33.572 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:33.572 [281/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:33.572 [282/743] Generating lib/rte_eventdev_def with a custom command 00:02:33.830 [283/743] Linking target lib/librte_metrics.so.23.0 00:02:33.830 [284/743] Linking target lib/librte_bpf.so.23.0 00:02:33.830 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:33.830 [286/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:33.830 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:33.830 [288/743] Linking target lib/librte_bitratestats.so.23.0 00:02:33.830 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:34.088 [290/743] Generating lib/rte_gpudev_mingw with a 
custom command 00:02:34.346 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:34.605 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:34.605 [293/743] Linking static target lib/librte_efd.a 00:02:34.605 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:34.605 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:34.605 [296/743] Linking static target lib/librte_cryptodev.a 00:02:34.863 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.863 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:34.863 [299/743] Linking target lib/librte_efd.so.23.0 00:02:34.863 [300/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:34.863 [301/743] Linking static target lib/librte_gpudev.a 00:02:34.863 [302/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:34.863 [303/743] Generating lib/rte_gro_def with a custom command 00:02:35.123 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:35.123 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:35.123 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:35.400 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:35.400 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:35.665 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:35.665 [310/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:35.665 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:35.665 [312/743] Linking static target lib/librte_gro.a 00:02:35.665 [313/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.665 [314/743] Generating lib/rte_gso_def with a custom command 00:02:35.665 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:35.665 [316/743] Generating lib/rte_gso_mingw with a custom command 00:02:35.924 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:35.924 [318/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.924 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:35.924 [320/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:35.924 [321/743] Linking target lib/librte_gro.so.23.0 00:02:36.183 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:36.183 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:36.183 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:36.441 [325/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:36.441 [326/743] Linking static target lib/librte_jobstats.a 00:02:36.441 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:36.441 [328/743] Linking static target lib/librte_gso.a 00:02:36.441 [329/743] Generating lib/rte_jobstats_def with a custom command 00:02:36.441 [330/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:36.441 [331/743] Linking static target lib/librte_eventdev.a 00:02:36.441 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:36.441 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:36.441 [334/743] Generating 
lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.700 [335/743] Linking target lib/librte_gso.so.23.0 00:02:36.700 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:36.700 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:36.700 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:36.700 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:36.700 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.700 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:36.700 [342/743] Linking target lib/librte_jobstats.so.23.0 00:02:36.700 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:36.700 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:02:36.958 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:36.958 [346/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.958 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:02:36.958 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:36.958 [349/743] Linking static target lib/librte_ip_frag.a 00:02:36.958 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:37.217 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.217 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:37.217 [353/743] Linking static target lib/librte_latencystats.a 00:02:37.217 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:02:37.477 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:37.477 [356/743] Generating lib/rte_member_def with a custom command 00:02:37.477 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:37.477 [358/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.477 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:37.477 [360/743] Linking target lib/librte_latencystats.so.23.0 00:02:37.477 [361/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:37.736 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:37.736 [363/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:37.736 [364/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:37.736 [365/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:37.736 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:37.736 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:37.736 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:37.736 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:37.994 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.253 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:38.253 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:38.253 [373/743] Linking static target lib/librte_lpm.a 00:02:38.253 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:38.253 [375/743] 
Generating lib/rte_power_def with a custom command 00:02:38.253 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:38.511 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.511 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.511 [379/743] Generating lib/rte_rawdev_def with a custom command 00:02:38.511 [380/743] Linking target lib/librte_eventdev.so.23.0 00:02:38.511 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:38.511 [382/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:38.511 [383/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.511 [384/743] Linking static target lib/librte_pcapng.a 00:02:38.511 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.511 [386/743] Linking target lib/librte_lpm.so.23.0 00:02:38.769 [387/743] Generating lib/rte_regexdev_def with a custom command 00:02:38.769 [388/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:38.769 [389/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:38.769 [390/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:38.769 [391/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.769 [392/743] Generating lib/rte_dmadev_def with a custom command 00:02:38.769 [393/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:38.769 [394/743] Generating lib/rte_rib_def with a custom command 00:02:38.769 [395/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:38.769 [396/743] Linking static target lib/librte_rawdev.a 00:02:38.769 [397/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:38.769 [398/743] Generating lib/rte_rib_mingw with a custom command 00:02:38.769 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:38.769 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:02:38.769 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.027 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:39.027 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.027 [404/743] Linking static target lib/librte_power.a 00:02:39.027 [405/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:39.027 [406/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:39.027 [407/743] Linking static target lib/librte_dmadev.a 00:02:39.286 [408/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:39.286 [409/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.286 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:39.286 [411/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:39.286 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:39.286 [413/743] Generating lib/rte_sched_def with a custom command 00:02:39.286 [414/743] Generating lib/rte_sched_mingw with a custom command 00:02:39.286 [415/743] Generating lib/rte_security_def with a custom command 00:02:39.286 [416/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:39.286 [417/743] Linking static target lib/librte_regexdev.a 00:02:39.544 [418/743] Generating 
lib/rte_security_mingw with a custom command 00:02:39.544 [419/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:39.544 [420/743] Linking static target lib/librte_member.a 00:02:39.544 [421/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:39.544 [422/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:39.544 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:39.544 [424/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.544 [425/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.544 [426/743] Linking static target lib/librte_reorder.a 00:02:39.809 [427/743] Generating lib/rte_stack_def with a custom command 00:02:39.809 [428/743] Linking target lib/librte_dmadev.so.23.0 00:02:39.809 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:39.809 [430/743] Linking static target lib/librte_stack.a 00:02:39.809 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:39.809 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:39.809 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.809 [434/743] Linking target lib/librte_member.so.23.0 00:02:39.809 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.809 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.809 [437/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.809 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:39.809 [439/743] Linking static target lib/librte_rib.a 00:02:40.074 [440/743] Linking target lib/librte_stack.so.23.0 00:02:40.074 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:40.074 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.074 [443/743] Linking target lib/librte_power.so.23.0 00:02:40.074 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.332 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:40.332 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:40.332 [447/743] Linking static target lib/librte_security.a 00:02:40.332 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.332 [449/743] Linking target lib/librte_rib.so.23.0 00:02:40.590 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:40.590 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:40.590 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:02:40.590 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:40.590 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:40.590 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.849 [456/743] Linking target lib/librte_security.so.23.0 00:02:40.849 [457/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:40.849 [458/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:40.850 [459/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:40.850 [460/743] Linking static target lib/librte_sched.a 00:02:41.425 
[461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.425 [462/743] Linking target lib/librte_sched.so.23.0 00:02:41.425 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:41.684 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:41.684 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:41.684 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:41.684 [467/743] Generating lib/rte_ipsec_def with a custom command 00:02:41.684 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:41.684 [469/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:41.684 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:41.942 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:42.509 [472/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:42.509 [473/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:42.509 [474/743] Generating lib/rte_fib_def with a custom command 00:02:42.509 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:42.509 [476/743] Generating lib/rte_fib_mingw with a custom command 00:02:42.509 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:42.509 [478/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:42.509 [479/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:43.079 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:43.079 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:43.079 [482/743] Linking static target lib/librte_ipsec.a 00:02:43.646 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.646 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:43.904 [485/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:43.904 [486/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:43.904 [487/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:43.904 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:43.904 [489/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:44.162 [490/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:44.162 [491/743] Linking static target lib/librte_fib.a 00:02:44.419 [492/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:44.677 [493/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.677 [494/743] Linking target lib/librte_fib.so.23.0 00:02:45.244 [495/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:45.244 [496/743] Generating lib/rte_port_def with a custom command 00:02:45.244 [497/743] Generating lib/rte_port_mingw with a custom command 00:02:45.244 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:45.244 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:45.244 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:02:45.502 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:45.502 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:45.502 [503/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:45.502 [504/743] Compiling C object 
lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:45.760 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:46.018 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:46.018 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:46.018 [508/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:46.584 [509/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:46.584 [510/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:46.842 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:46.842 [512/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:46.842 [513/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:46.842 [514/743] Linking static target lib/librte_pdump.a 00:02:46.842 [515/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:46.842 [516/743] Linking static target lib/librte_port.a 00:02:47.101 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:47.101 [518/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.101 [519/743] Linking target lib/librte_pdump.so.23.0 00:02:47.667 [520/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.667 [521/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:47.667 [522/743] Linking target lib/librte_port.so.23.0 00:02:47.925 [523/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:47.925 [524/743] Generating lib/rte_table_def with a custom command 00:02:47.925 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:47.925 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:48.184 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:48.184 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:48.184 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:48.184 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:48.446 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:48.446 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:48.446 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:48.446 [534/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.446 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:48.446 [536/743] Linking static target lib/librte_table.a 00:02:48.704 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:49.271 [538/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:49.271 [539/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:49.271 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.529 [541/743] Linking target lib/librte_table.so.23.0 00:02:49.529 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:49.529 [543/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:49.529 [544/743] Generating lib/rte_graph_def with a custom command 00:02:49.529 [545/743] Generating lib/rte_graph_mingw with a 
custom command 00:02:49.788 [546/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:49.788 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:50.046 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:50.303 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:50.303 [550/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:50.303 [551/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:50.303 [552/743] Linking static target lib/librte_graph.a 00:02:50.561 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:50.561 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:51.127 [555/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:51.127 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:51.127 [557/743] Generating lib/rte_node_def with a custom command 00:02:51.127 [558/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:51.127 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:51.127 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:51.386 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:51.386 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.386 [563/743] Linking target lib/librte_graph.so.23.0 00:02:51.644 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:51.644 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:51.644 [566/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:51.644 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:51.644 [568/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:51.644 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:51.644 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:51.644 [571/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:51.644 [572/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:51.644 [573/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:51.902 [574/743] Linking static target lib/librte_node.a 00:02:51.902 [575/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:51.902 [576/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:51.902 [577/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:51.902 [578/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:51.902 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:51.902 [580/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:52.161 [581/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.161 [582/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:52.161 [583/743] Linking target lib/librte_node.so.23.0 00:02:52.161 [584/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:52.161 [585/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:52.161 [586/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:52.419 [587/743] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.419 [588/743] Linking static target drivers/librte_bus_vdev.a 00:02:52.419 [589/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:52.419 [590/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.419 [591/743] Linking static target drivers/librte_bus_pci.a 00:02:52.678 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.678 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:52.678 [594/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:52.678 [595/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:52.678 [596/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:52.936 [597/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:52.936 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:52.936 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.936 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:52.936 [601/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:53.193 [602/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:53.193 [603/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:53.194 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:53.452 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:53.452 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.452 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:53.452 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:53.452 [609/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:54.018 [610/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:54.018 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:54.276 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:54.276 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:54.276 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:55.211 [615/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:55.211 [616/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:55.777 [617/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:55.777 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:55.777 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:56.035 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:56.602 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:56.602 [622/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:56.602 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:56.602 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:02:56.602 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:57.537 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.796 [627/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:57.796 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.054 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.054 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:58.054 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:58.054 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.054 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:58.313 [634/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.313 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:58.572 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:59.140 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:59.140 [638/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:59.417 [639/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:59.675 [640/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:59.675 [641/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:59.933 [642/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:00.191 [643/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.191 [644/743] Linking static target lib/librte_vhost.a 00:03:00.191 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:00.191 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:00.450 [647/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:00.709 [648/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:00.709 [649/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:00.709 [650/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:00.968 [651/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:01.226 [652/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:01.226 [653/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.226 [654/743] Linking static target drivers/librte_net_i40e.a 00:03:01.485 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:01.485 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:01.485 [657/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:01.485 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:01.743 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:01.743 [660/743] Compiling C 
object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:01.743 [661/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:01.743 [662/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.743 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:02.002 [664/743] Linking target lib/librte_vhost.so.23.0 00:03:02.002 [665/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.002 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:02.260 [667/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:02.260 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:02.260 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:02.518 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:02.776 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:03.035 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:03.035 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:03.971 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:03.971 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:03.971 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:04.229 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:04.488 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:04.488 [679/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:04.746 [680/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:04.746 [681/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:04.746 [682/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:05.004 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:05.262 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:05.262 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:05.521 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:05.521 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:05.779 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:05.779 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:05.779 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:05.779 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.052 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.328 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.586 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:06.586 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:06.586 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:06.844 [697/743] 
Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:07.101 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:07.101 [699/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:07.360 [700/743] Linking static target lib/librte_pipeline.a 00:03:07.618 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:07.618 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:07.618 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:08.185 [704/743] Linking target app/dpdk-proc-info 00:03:08.185 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:08.185 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:08.185 [707/743] Linking target app/dpdk-dumpcap 00:03:08.442 [708/743] Linking target app/dpdk-pdump 00:03:08.700 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:08.959 [710/743] Linking target app/dpdk-test-acl 00:03:08.959 [711/743] Linking target app/dpdk-test-cmdline 00:03:08.959 [712/743] Linking target app/dpdk-test-bbdev 00:03:08.959 [713/743] Linking target app/dpdk-test-compress-perf 00:03:09.217 [714/743] Linking target app/dpdk-test-crypto-perf 00:03:09.217 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:09.217 [716/743] Linking target app/dpdk-test-eventdev 00:03:09.476 [717/743] Linking target app/dpdk-test-flow-perf 00:03:09.737 [718/743] Linking target app/dpdk-test-fib 00:03:09.737 [719/743] Linking target app/dpdk-test-gpudev 00:03:09.737 [720/743] Linking target app/dpdk-test-pipeline 00:03:10.306 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.306 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:10.564 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:10.823 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:10.823 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:11.081 [726/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.081 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:11.081 [728/743] Linking target lib/librte_pipeline.so.23.0 00:03:11.647 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:11.647 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:11.906 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:11.906 [732/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:11.906 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:12.474 [734/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:12.474 [735/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:12.474 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:12.474 [737/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:12.474 [738/743] Linking target app/dpdk-test-sad 00:03:12.732 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:12.994 [740/743] Linking target app/dpdk-test-regex 00:03:13.258 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:13.258 [742/743] Linking target app/dpdk-testpmd 00:03:13.826 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:13.826 10:03:33 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:13.826 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:13.826 [0/1] Installing files. 00:03:14.085 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:14.085 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:14.086 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:14.349 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:14.349 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.349 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.349 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.350 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:14.351 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.352 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.352 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.353 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:14.353 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.353 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.614 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.614 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.614 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:14.614 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:14.614 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.614 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.615 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.616 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.876 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.877 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.878 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:14.878 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:14.878 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:14.878 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:14.878 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:14.878 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:14.878 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:14.878 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:14.878 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:14.878 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:14.878 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:14.878 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:14.878 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:14.878 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:14.878 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:14.878 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:14.878 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:14.878 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:14.878 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:14.878 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:14.878 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:14.878 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:14.878 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:14.878 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:14.878 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:14.878 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:14.878 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:14.878 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:14.878 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:14.878 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:14.878 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:14.878 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:14.878 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:14.878 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:14.878 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:14.878 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:14.878 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:14.878 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:14.878 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:14.878 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:14.878 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:14.878 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:14.878 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:14.878 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:14.878 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:14.878 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:14.878 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:14.878 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:14.878 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:14.878 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:14.878 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:14.878 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:14.878 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:14.878 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:14.878 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:14.878 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:14.878 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:14.878 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:14.878 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:14.878 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:14.878 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:14.878 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:14.878 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:14.878 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:14.878 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:14.878 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:14.878 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:14.878 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:14.878 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:14.878 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:14.878 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:14.878 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:14.878 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:14.878 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:14.878 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:14.878 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:14.878 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:14.878 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:14.878 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:14.878 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
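The './librte_bus_pci.so' -> 'dpdk/pmds-23.0/...' entries above are the install step relocating the PMD driver libraries into DPDK's plugin directory, while the surrounding "Installing symlink" entries create the usual versioned chain librte_foo.so -> librte_foo.so.23 -> librte_foo.so.23.0. A minimal sketch of how that layout could be spot-checked after the install finishes; the prefix is simply the build directory used throughout this run, and the check itself is illustrative, not part of the autotest scripts:

PREFIX=/home/vagrant/spdk_repo/dpdk/build
for lib in "$PREFIX"/lib/librte_*.so; do
    # readlink -f follows librte_foo.so -> librte_foo.so.23 -> librte_foo.so.23.0
    printf '%s -> %s\n' "$lib" "$(readlink -f "$lib")"
done
ls "$PREFIX"/lib/dpdk/pmds-23.0/    # PMD plugins kept together so the EAL can load them as a group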
00:03:14.878 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:14.878 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:14.878 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:14.878 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:14.878 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:14.878 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:14.878 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:14.878 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:14.878 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:14.878 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:14.878 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:14.878 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:14.878 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:14.878 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:14.878 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:14.878 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:14.878 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:14.878 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:14.878 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:14.878 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:14.878 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:14.878 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:14.878 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:14.878 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:14.879 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:14.879 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:14.879 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:14.879 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:14.879 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:14.879 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:14.879 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:14.879 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:14.879 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:14.879 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:14.879 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:14.879 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:14.879 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:14.879 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:14.879 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:14.879 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:14.879 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:14.879 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:14.879 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:14.879 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:14.879 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:14.879 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:14.879 10:03:34 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:14.879 10:03:34 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:14.879 10:03:34 -- common/autobuild_common.sh@203 -- $ cat 00:03:14.879 10:03:34 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:14.879 00:03:14.879 real 1m0.427s 00:03:14.879 user 7m17.968s 00:03:14.879 sys 1m3.634s 00:03:14.879 10:03:34 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:14.879 ************************************ 00:03:14.879 END TEST build_native_dpdk 00:03:14.879 ************************************ 00:03:14.879 10:03:34 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.879 10:03:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:14.879 10:03:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:14.879 10:03:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:14.879 10:03:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:14.879 10:03:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:14.879 10:03:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:14.879 10:03:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:14.879 10:03:34 
-- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:15.138 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:15.138 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:15.138 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:15.138 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:15.703 Using 'verbs' RDMA provider 00:03:28.516 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:40.717 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:40.717 go version go1.21.1 linux/amd64 00:03:40.717 Creating mk/config.mk...done. 00:03:40.717 Creating mk/cc.flags.mk...done. 00:03:40.717 Type 'make' to build. 00:03:40.717 10:03:59 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:40.717 10:03:59 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:03:40.717 10:03:59 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:03:40.717 10:03:59 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.717 ************************************ 00:03:40.717 START TEST make 00:03:40.717 ************************************ 00:03:40.717 10:03:59 -- common/autotest_common.sh@1114 -- $ make -j10 00:03:40.717 make[1]: Nothing to be done for 'all'. 00:04:19.424 CC lib/ut/ut.o 00:04:19.424 CC lib/ut_mock/mock.o 00:04:19.424 CC lib/log/log.o 00:04:19.424 CC lib/log/log_flags.o 00:04:19.424 CC lib/log/log_deprecated.o 00:04:19.424 LIB libspdk_ut_mock.a 00:04:19.424 SO libspdk_ut_mock.so.5.0 00:04:19.424 SYMLINK libspdk_ut_mock.so 00:04:19.424 LIB libspdk_log.a 00:04:19.424 LIB libspdk_ut.a 00:04:19.424 SO libspdk_log.so.6.1 00:04:19.424 SO libspdk_ut.so.1.0 00:04:19.424 SYMLINK libspdk_log.so 00:04:19.424 SYMLINK libspdk_ut.so 00:04:19.424 CC lib/dma/dma.o 00:04:19.424 CXX lib/trace_parser/trace.o 00:04:19.424 CC lib/ioat/ioat.o 00:04:19.424 CC lib/util/base64.o 00:04:19.424 CC lib/util/cpuset.o 00:04:19.424 CC lib/util/bit_array.o 00:04:19.424 CC lib/util/crc16.o 00:04:19.424 CC lib/util/crc32.o 00:04:19.424 CC lib/util/crc32c.o 00:04:19.424 CC lib/vfio_user/host/vfio_user_pci.o 00:04:19.424 CC lib/util/crc32_ieee.o 00:04:19.424 CC lib/util/crc64.o 00:04:19.424 CC lib/util/dif.o 00:04:19.424 CC lib/util/fd.o 00:04:19.424 LIB libspdk_dma.a 00:04:19.424 SO libspdk_dma.so.3.0 00:04:19.424 CC lib/util/file.o 00:04:19.424 CC lib/util/hexlify.o 00:04:19.424 SYMLINK libspdk_dma.so 00:04:19.424 CC lib/util/iov.o 00:04:19.424 CC lib/vfio_user/host/vfio_user.o 00:04:19.424 CC lib/util/math.o 00:04:19.424 CC lib/util/pipe.o 00:04:19.424 LIB libspdk_ioat.a 00:04:19.424 SO libspdk_ioat.so.6.0 00:04:19.424 CC lib/util/strerror_tls.o 00:04:19.424 CC lib/util/string.o 00:04:19.424 SYMLINK libspdk_ioat.so 00:04:19.424 CC lib/util/uuid.o 00:04:19.424 CC lib/util/fd_group.o 00:04:19.424 CC lib/util/xor.o 00:04:19.424 CC lib/util/zipf.o 00:04:19.424 LIB libspdk_vfio_user.a 00:04:19.424 SO libspdk_vfio_user.so.4.0 00:04:19.425 SYMLINK libspdk_vfio_user.so 00:04:19.425 LIB libspdk_util.a 00:04:19.425 SO libspdk_util.so.8.0 00:04:19.425 SYMLINK libspdk_util.so 00:04:19.425 LIB libspdk_trace_parser.a 00:04:19.425 SO libspdk_trace_parser.so.4.0 00:04:19.425 CC 
lib/conf/conf.o 00:04:19.425 CC lib/env_dpdk/env.o 00:04:19.425 CC lib/env_dpdk/memory.o 00:04:19.425 CC lib/rdma/common.o 00:04:19.425 CC lib/env_dpdk/pci.o 00:04:19.425 CC lib/rdma/rdma_verbs.o 00:04:19.425 CC lib/idxd/idxd.o 00:04:19.425 CC lib/json/json_parse.o 00:04:19.425 CC lib/vmd/vmd.o 00:04:19.425 SYMLINK libspdk_trace_parser.so 00:04:19.425 CC lib/env_dpdk/init.o 00:04:19.682 LIB libspdk_conf.a 00:04:19.682 SO libspdk_conf.so.5.0 00:04:19.940 CC lib/env_dpdk/threads.o 00:04:19.940 CC lib/json/json_util.o 00:04:19.940 SYMLINK libspdk_conf.so 00:04:19.940 CC lib/vmd/led.o 00:04:19.940 LIB libspdk_rdma.a 00:04:19.940 CC lib/json/json_write.o 00:04:19.940 SO libspdk_rdma.so.5.0 00:04:19.940 CC lib/env_dpdk/pci_ioat.o 00:04:20.199 CC lib/env_dpdk/pci_virtio.o 00:04:20.199 SYMLINK libspdk_rdma.so 00:04:20.199 CC lib/idxd/idxd_user.o 00:04:20.199 CC lib/env_dpdk/pci_vmd.o 00:04:20.199 CC lib/env_dpdk/pci_idxd.o 00:04:20.199 CC lib/idxd/idxd_kernel.o 00:04:20.199 CC lib/env_dpdk/pci_event.o 00:04:20.199 CC lib/env_dpdk/sigbus_handler.o 00:04:20.457 CC lib/env_dpdk/pci_dpdk.o 00:04:20.457 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:20.457 LIB libspdk_vmd.a 00:04:20.457 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:20.457 LIB libspdk_json.a 00:04:20.457 SO libspdk_vmd.so.5.0 00:04:20.457 SO libspdk_json.so.5.1 00:04:20.457 LIB libspdk_idxd.a 00:04:20.457 SYMLINK libspdk_vmd.so 00:04:20.716 SO libspdk_idxd.so.11.0 00:04:20.716 SYMLINK libspdk_json.so 00:04:20.716 SYMLINK libspdk_idxd.so 00:04:20.716 CC lib/jsonrpc/jsonrpc_server.o 00:04:20.716 CC lib/jsonrpc/jsonrpc_client.o 00:04:20.716 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:20.716 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:21.306 LIB libspdk_jsonrpc.a 00:04:21.306 SO libspdk_jsonrpc.so.5.1 00:04:21.306 SYMLINK libspdk_jsonrpc.so 00:04:21.565 CC lib/rpc/rpc.o 00:04:21.823 LIB libspdk_env_dpdk.a 00:04:21.823 SO libspdk_env_dpdk.so.13.0 00:04:21.823 LIB libspdk_rpc.a 00:04:21.823 SO libspdk_rpc.so.5.0 00:04:21.823 SYMLINK libspdk_rpc.so 00:04:22.081 SYMLINK libspdk_env_dpdk.so 00:04:22.081 CC lib/trace/trace.o 00:04:22.081 CC lib/trace/trace_flags.o 00:04:22.081 CC lib/trace/trace_rpc.o 00:04:22.081 CC lib/sock/sock.o 00:04:22.081 CC lib/sock/sock_rpc.o 00:04:22.081 CC lib/notify/notify.o 00:04:22.081 CC lib/notify/notify_rpc.o 00:04:22.339 LIB libspdk_notify.a 00:04:22.339 SO libspdk_notify.so.5.0 00:04:22.339 SYMLINK libspdk_notify.so 00:04:22.598 LIB libspdk_trace.a 00:04:22.598 SO libspdk_trace.so.9.0 00:04:22.598 SYMLINK libspdk_trace.so 00:04:22.598 LIB libspdk_sock.a 00:04:22.598 SO libspdk_sock.so.8.0 00:04:22.858 CC lib/thread/thread.o 00:04:22.858 CC lib/thread/iobuf.o 00:04:22.858 SYMLINK libspdk_sock.so 00:04:22.858 CC lib/nvme/nvme_ctrlr.o 00:04:22.858 CC lib/nvme/nvme_ns_cmd.o 00:04:22.858 CC lib/nvme/nvme_ns.o 00:04:22.858 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:22.858 CC lib/nvme/nvme_pcie_common.o 00:04:22.858 CC lib/nvme/nvme_fabric.o 00:04:22.858 CC lib/nvme/nvme_pcie.o 00:04:22.858 CC lib/nvme/nvme_qpair.o 00:04:23.425 CC lib/nvme/nvme.o 00:04:24.361 CC lib/nvme/nvme_quirks.o 00:04:24.361 CC lib/nvme/nvme_transport.o 00:04:24.361 CC lib/nvme/nvme_discovery.o 00:04:24.361 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:24.361 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:24.361 CC lib/nvme/nvme_tcp.o 00:04:24.928 CC lib/nvme/nvme_opal.o 00:04:24.928 CC lib/nvme/nvme_io_msg.o 00:04:25.186 CC lib/nvme/nvme_poll_group.o 00:04:25.444 CC lib/nvme/nvme_zns.o 00:04:25.444 CC lib/nvme/nvme_cuse.o 00:04:25.444 CC lib/nvme/nvme_vfio_user.o 00:04:25.702 LIB 
libspdk_thread.a 00:04:25.702 CC lib/nvme/nvme_rdma.o 00:04:25.702 SO libspdk_thread.so.9.0 00:04:25.702 SYMLINK libspdk_thread.so 00:04:25.960 CC lib/blob/blobstore.o 00:04:25.960 CC lib/accel/accel.o 00:04:25.960 CC lib/accel/accel_rpc.o 00:04:26.217 CC lib/accel/accel_sw.o 00:04:26.474 CC lib/init/json_config.o 00:04:26.474 CC lib/init/subsystem.o 00:04:26.474 CC lib/virtio/virtio.o 00:04:26.474 CC lib/virtio/virtio_vhost_user.o 00:04:26.731 CC lib/init/subsystem_rpc.o 00:04:26.731 CC lib/init/rpc.o 00:04:26.731 CC lib/virtio/virtio_vfio_user.o 00:04:26.988 CC lib/blob/request.o 00:04:26.988 CC lib/blob/zeroes.o 00:04:26.988 CC lib/virtio/virtio_pci.o 00:04:26.988 LIB libspdk_init.a 00:04:26.988 CC lib/blob/blob_bs_dev.o 00:04:27.244 SO libspdk_init.so.4.0 00:04:27.244 SYMLINK libspdk_init.so 00:04:27.501 LIB libspdk_virtio.a 00:04:27.501 CC lib/event/app.o 00:04:27.501 CC lib/event/reactor.o 00:04:27.501 CC lib/event/log_rpc.o 00:04:27.501 CC lib/event/app_rpc.o 00:04:27.501 CC lib/event/scheduler_static.o 00:04:27.501 SO libspdk_virtio.so.6.0 00:04:27.501 SYMLINK libspdk_virtio.so 00:04:27.758 LIB libspdk_accel.a 00:04:27.758 SO libspdk_accel.so.14.0 00:04:27.758 SYMLINK libspdk_accel.so 00:04:28.014 LIB libspdk_nvme.a 00:04:28.014 CC lib/bdev/bdev.o 00:04:28.014 CC lib/bdev/bdev_rpc.o 00:04:28.014 CC lib/bdev/bdev_zone.o 00:04:28.014 CC lib/bdev/part.o 00:04:28.014 CC lib/bdev/scsi_nvme.o 00:04:28.014 LIB libspdk_event.a 00:04:28.014 SO libspdk_event.so.12.0 00:04:28.274 SO libspdk_nvme.so.12.0 00:04:28.274 SYMLINK libspdk_event.so 00:04:28.532 SYMLINK libspdk_nvme.so 00:04:29.905 LIB libspdk_blob.a 00:04:30.162 SO libspdk_blob.so.10.1 00:04:30.162 SYMLINK libspdk_blob.so 00:04:30.419 CC lib/blobfs/blobfs.o 00:04:30.419 CC lib/lvol/lvol.o 00:04:30.419 CC lib/blobfs/tree.o 00:04:31.354 LIB libspdk_bdev.a 00:04:31.354 LIB libspdk_blobfs.a 00:04:31.354 SO libspdk_bdev.so.14.0 00:04:31.354 SO libspdk_blobfs.so.9.0 00:04:31.612 SYMLINK libspdk_blobfs.so 00:04:31.612 LIB libspdk_lvol.a 00:04:31.612 SYMLINK libspdk_bdev.so 00:04:31.612 SO libspdk_lvol.so.9.1 00:04:31.612 SYMLINK libspdk_lvol.so 00:04:31.612 CC lib/nvmf/ctrlr.o 00:04:31.612 CC lib/nvmf/ctrlr_discovery.o 00:04:31.612 CC lib/nvmf/ctrlr_bdev.o 00:04:31.612 CC lib/nvmf/subsystem.o 00:04:31.612 CC lib/scsi/dev.o 00:04:31.612 CC lib/scsi/lun.o 00:04:31.612 CC lib/scsi/port.o 00:04:31.612 CC lib/ftl/ftl_core.o 00:04:31.612 CC lib/ublk/ublk.o 00:04:31.612 CC lib/nbd/nbd.o 00:04:31.870 CC lib/nbd/nbd_rpc.o 00:04:32.127 CC lib/ftl/ftl_init.o 00:04:32.127 CC lib/scsi/scsi.o 00:04:32.127 CC lib/ftl/ftl_layout.o 00:04:32.385 CC lib/ftl/ftl_debug.o 00:04:32.385 CC lib/ublk/ublk_rpc.o 00:04:32.385 LIB libspdk_nbd.a 00:04:32.385 CC lib/scsi/scsi_bdev.o 00:04:32.385 SO libspdk_nbd.so.6.0 00:04:32.385 CC lib/scsi/scsi_pr.o 00:04:32.643 CC lib/nvmf/nvmf.o 00:04:32.643 SYMLINK libspdk_nbd.so 00:04:32.643 CC lib/ftl/ftl_io.o 00:04:32.643 CC lib/ftl/ftl_sb.o 00:04:32.643 LIB libspdk_ublk.a 00:04:32.643 CC lib/ftl/ftl_l2p.o 00:04:32.643 SO libspdk_ublk.so.2.0 00:04:32.901 CC lib/nvmf/nvmf_rpc.o 00:04:32.901 SYMLINK libspdk_ublk.so 00:04:32.901 CC lib/ftl/ftl_l2p_flat.o 00:04:32.901 CC lib/ftl/ftl_nv_cache.o 00:04:32.901 CC lib/ftl/ftl_band.o 00:04:32.901 CC lib/ftl/ftl_band_ops.o 00:04:32.901 CC lib/scsi/scsi_rpc.o 00:04:33.158 CC lib/ftl/ftl_writer.o 00:04:33.158 CC lib/scsi/task.o 00:04:33.158 CC lib/ftl/ftl_rq.o 00:04:33.416 CC lib/ftl/ftl_reloc.o 00:04:33.416 LIB libspdk_scsi.a 00:04:33.416 CC lib/ftl/ftl_l2p_cache.o 00:04:33.674 CC 
lib/nvmf/transport.o 00:04:33.674 CC lib/nvmf/tcp.o 00:04:33.674 SO libspdk_scsi.so.8.0 00:04:33.674 CC lib/ftl/ftl_p2l.o 00:04:33.674 SYMLINK libspdk_scsi.so 00:04:33.674 CC lib/ftl/mngt/ftl_mngt.o 00:04:33.674 CC lib/nvmf/rdma.o 00:04:33.932 CC lib/iscsi/conn.o 00:04:33.932 CC lib/iscsi/init_grp.o 00:04:34.190 CC lib/iscsi/iscsi.o 00:04:34.190 CC lib/vhost/vhost.o 00:04:34.190 CC lib/vhost/vhost_rpc.o 00:04:34.449 CC lib/vhost/vhost_scsi.o 00:04:34.449 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:34.449 CC lib/vhost/vhost_blk.o 00:04:34.707 CC lib/vhost/rte_vhost_user.o 00:04:34.964 CC lib/iscsi/md5.o 00:04:34.964 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:34.964 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:34.964 CC lib/iscsi/param.o 00:04:35.223 CC lib/iscsi/portal_grp.o 00:04:35.223 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.481 CC lib/iscsi/tgt_node.o 00:04:35.481 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.481 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.481 CC lib/iscsi/iscsi_subsystem.o 00:04:35.739 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.996 CC lib/iscsi/iscsi_rpc.o 00:04:35.996 CC lib/iscsi/task.o 00:04:35.996 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.254 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.254 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.254 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.254 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:36.254 CC lib/ftl/utils/ftl_conf.o 00:04:36.512 CC lib/ftl/utils/ftl_md.o 00:04:36.512 CC lib/ftl/utils/ftl_mempool.o 00:04:36.512 CC lib/ftl/utils/ftl_bitmap.o 00:04:36.769 LIB libspdk_iscsi.a 00:04:36.769 LIB libspdk_vhost.a 00:04:36.769 CC lib/ftl/utils/ftl_property.o 00:04:36.769 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:36.769 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:36.769 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:36.769 SO libspdk_iscsi.so.7.0 00:04:36.769 SO libspdk_vhost.so.7.1 00:04:36.769 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:37.026 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:37.026 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:37.026 SYMLINK libspdk_vhost.so 00:04:37.026 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:37.026 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:37.026 SYMLINK libspdk_iscsi.so 00:04:37.026 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:37.026 LIB libspdk_nvmf.a 00:04:37.026 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:37.026 CC lib/ftl/base/ftl_base_dev.o 00:04:37.026 CC lib/ftl/base/ftl_base_bdev.o 00:04:37.284 CC lib/ftl/ftl_trace.o 00:04:37.284 SO libspdk_nvmf.so.17.0 00:04:37.541 SYMLINK libspdk_nvmf.so 00:04:37.541 LIB libspdk_ftl.a 00:04:37.799 SO libspdk_ftl.so.8.0 00:04:38.057 SYMLINK libspdk_ftl.so 00:04:38.315 CC module/env_dpdk/env_dpdk_rpc.o 00:04:38.315 CC module/accel/error/accel_error.o 00:04:38.315 CC module/blob/bdev/blob_bdev.o 00:04:38.315 CC module/scheduler/gscheduler/gscheduler.o 00:04:38.315 CC module/accel/dsa/accel_dsa.o 00:04:38.315 CC module/sock/posix/posix.o 00:04:38.315 CC module/accel/ioat/accel_ioat.o 00:04:38.315 CC module/accel/iaa/accel_iaa.o 00:04:38.573 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:38.573 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:38.573 LIB libspdk_env_dpdk_rpc.a 00:04:38.573 SO libspdk_env_dpdk_rpc.so.5.0 00:04:38.573 SYMLINK libspdk_env_dpdk_rpc.so 00:04:38.573 CC module/accel/iaa/accel_iaa_rpc.o 00:04:38.573 LIB libspdk_scheduler_dynamic.a 00:04:38.573 LIB libspdk_scheduler_gscheduler.a 00:04:38.573 SO libspdk_scheduler_dynamic.so.3.0 00:04:38.831 SO libspdk_scheduler_gscheduler.so.3.0 00:04:38.831 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.831 CC 
module/accel/error/accel_error_rpc.o 00:04:38.831 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:38.831 CC module/accel/ioat/accel_ioat_rpc.o 00:04:38.831 SYMLINK libspdk_scheduler_dynamic.so 00:04:38.831 CC module/accel/dsa/accel_dsa_rpc.o 00:04:38.831 SYMLINK libspdk_scheduler_gscheduler.so 00:04:38.831 LIB libspdk_accel_iaa.a 00:04:38.831 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.831 LIB libspdk_blob_bdev.a 00:04:38.831 SO libspdk_accel_iaa.so.2.0 00:04:38.831 SO libspdk_blob_bdev.so.10.1 00:04:38.831 SYMLINK libspdk_accel_iaa.so 00:04:38.831 SYMLINK libspdk_blob_bdev.so 00:04:39.090 LIB libspdk_accel_dsa.a 00:04:39.090 LIB libspdk_accel_error.a 00:04:39.090 SO libspdk_accel_dsa.so.4.0 00:04:39.090 LIB libspdk_accel_ioat.a 00:04:39.090 SO libspdk_accel_error.so.1.0 00:04:39.090 SO libspdk_accel_ioat.so.5.0 00:04:39.090 SYMLINK libspdk_accel_dsa.so 00:04:39.090 SYMLINK libspdk_accel_error.so 00:04:39.090 SYMLINK libspdk_accel_ioat.so 00:04:39.090 CC module/bdev/gpt/gpt.o 00:04:39.090 CC module/bdev/error/vbdev_error.o 00:04:39.090 CC module/blobfs/bdev/blobfs_bdev.o 00:04:39.348 CC module/bdev/delay/vbdev_delay.o 00:04:39.348 CC module/bdev/malloc/bdev_malloc.o 00:04:39.348 CC module/bdev/lvol/vbdev_lvol.o 00:04:39.348 CC module/bdev/null/bdev_null.o 00:04:39.348 CC module/bdev/nvme/bdev_nvme.o 00:04:39.348 CC module/bdev/passthru/vbdev_passthru.o 00:04:39.606 CC module/bdev/error/vbdev_error_rpc.o 00:04:39.606 CC module/bdev/gpt/vbdev_gpt.o 00:04:39.606 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:39.606 LIB libspdk_sock_posix.a 00:04:39.606 CC module/bdev/null/bdev_null_rpc.o 00:04:39.864 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:39.864 SO libspdk_sock_posix.so.5.0 00:04:39.864 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:39.864 LIB libspdk_bdev_error.a 00:04:39.864 LIB libspdk_blobfs_bdev.a 00:04:39.864 SO libspdk_bdev_error.so.5.0 00:04:39.864 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.864 SYMLINK libspdk_sock_posix.so 00:04:39.864 SO libspdk_blobfs_bdev.so.5.0 00:04:39.864 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:39.864 SYMLINK libspdk_bdev_error.so 00:04:39.864 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:39.864 SYMLINK libspdk_blobfs_bdev.so 00:04:40.121 LIB libspdk_bdev_gpt.a 00:04:40.121 LIB libspdk_bdev_null.a 00:04:40.121 LIB libspdk_bdev_delay.a 00:04:40.121 LIB libspdk_bdev_malloc.a 00:04:40.121 SO libspdk_bdev_null.so.5.0 00:04:40.121 SO libspdk_bdev_gpt.so.5.0 00:04:40.121 SO libspdk_bdev_delay.so.5.0 00:04:40.121 CC module/bdev/nvme/nvme_rpc.o 00:04:40.121 SO libspdk_bdev_malloc.so.5.0 00:04:40.121 LIB libspdk_bdev_passthru.a 00:04:40.121 CC module/bdev/raid/bdev_raid.o 00:04:40.121 SYMLINK libspdk_bdev_gpt.so 00:04:40.121 SYMLINK libspdk_bdev_null.so 00:04:40.121 SO libspdk_bdev_passthru.so.5.0 00:04:40.121 SYMLINK libspdk_bdev_delay.so 00:04:40.121 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.121 SYMLINK libspdk_bdev_malloc.so 00:04:40.121 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.380 SYMLINK libspdk_bdev_passthru.so 00:04:40.380 CC module/bdev/split/vbdev_split.o 00:04:40.380 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:40.380 LIB libspdk_bdev_lvol.a 00:04:40.380 SO libspdk_bdev_lvol.so.5.0 00:04:40.380 CC module/bdev/aio/bdev_aio.o 00:04:40.380 CC module/bdev/aio/bdev_aio_rpc.o 00:04:40.638 SYMLINK libspdk_bdev_lvol.so 00:04:40.638 CC module/bdev/split/vbdev_split_rpc.o 00:04:40.638 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:40.638 CC module/bdev/raid/raid0.o 00:04:40.638 CC module/bdev/raid/raid1.o 00:04:40.897 LIB 
libspdk_bdev_split.a 00:04:40.897 CC module/bdev/ftl/bdev_ftl.o 00:04:40.897 LIB libspdk_bdev_zone_block.a 00:04:40.897 SO libspdk_bdev_split.so.5.0 00:04:40.897 SO libspdk_bdev_zone_block.so.5.0 00:04:40.897 LIB libspdk_bdev_aio.a 00:04:40.897 SYMLINK libspdk_bdev_split.so 00:04:41.156 CC module/bdev/nvme/bdev_mdns_client.o 00:04:41.156 SYMLINK libspdk_bdev_zone_block.so 00:04:41.156 CC module/bdev/nvme/vbdev_opal.o 00:04:41.156 CC module/bdev/iscsi/bdev_iscsi.o 00:04:41.156 SO libspdk_bdev_aio.so.5.0 00:04:41.156 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:41.156 SYMLINK libspdk_bdev_aio.so 00:04:41.156 CC module/bdev/raid/concat.o 00:04:41.156 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:41.156 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:41.415 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:41.415 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:41.415 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:41.415 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:41.673 LIB libspdk_bdev_ftl.a 00:04:41.673 LIB libspdk_bdev_raid.a 00:04:41.673 SO libspdk_bdev_ftl.so.5.0 00:04:41.673 LIB libspdk_bdev_iscsi.a 00:04:41.673 SO libspdk_bdev_raid.so.5.0 00:04:41.673 SO libspdk_bdev_iscsi.so.5.0 00:04:41.673 SYMLINK libspdk_bdev_ftl.so 00:04:41.673 SYMLINK libspdk_bdev_iscsi.so 00:04:41.673 SYMLINK libspdk_bdev_raid.so 00:04:41.930 LIB libspdk_bdev_virtio.a 00:04:41.930 SO libspdk_bdev_virtio.so.5.0 00:04:41.930 SYMLINK libspdk_bdev_virtio.so 00:04:42.495 LIB libspdk_bdev_nvme.a 00:04:42.752 SO libspdk_bdev_nvme.so.6.0 00:04:42.752 SYMLINK libspdk_bdev_nvme.so 00:04:43.010 CC module/event/subsystems/iobuf/iobuf.o 00:04:43.010 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:43.010 CC module/event/subsystems/vmd/vmd.o 00:04:43.010 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:43.010 CC module/event/subsystems/scheduler/scheduler.o 00:04:43.010 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:43.010 CC module/event/subsystems/sock/sock.o 00:04:43.269 LIB libspdk_event_sock.a 00:04:43.269 LIB libspdk_event_scheduler.a 00:04:43.269 LIB libspdk_event_vmd.a 00:04:43.269 LIB libspdk_event_vhost_blk.a 00:04:43.269 SO libspdk_event_sock.so.4.0 00:04:43.269 SO libspdk_event_scheduler.so.3.0 00:04:43.269 SO libspdk_event_vhost_blk.so.2.0 00:04:43.269 LIB libspdk_event_iobuf.a 00:04:43.526 SO libspdk_event_vmd.so.5.0 00:04:43.526 SYMLINK libspdk_event_sock.so 00:04:43.526 SO libspdk_event_iobuf.so.2.0 00:04:43.526 SYMLINK libspdk_event_scheduler.so 00:04:43.526 SYMLINK libspdk_event_vhost_blk.so 00:04:43.526 SYMLINK libspdk_event_vmd.so 00:04:43.526 SYMLINK libspdk_event_iobuf.so 00:04:43.526 CC module/event/subsystems/accel/accel.o 00:04:43.786 LIB libspdk_event_accel.a 00:04:43.786 SO libspdk_event_accel.so.5.0 00:04:43.786 SYMLINK libspdk_event_accel.so 00:04:44.045 CC module/event/subsystems/bdev/bdev.o 00:04:44.325 LIB libspdk_event_bdev.a 00:04:44.325 SO libspdk_event_bdev.so.5.0 00:04:44.325 SYMLINK libspdk_event_bdev.so 00:04:44.607 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.607 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.607 CC module/event/subsystems/scsi/scsi.o 00:04:44.607 CC module/event/subsystems/ublk/ublk.o 00:04:44.607 CC module/event/subsystems/nbd/nbd.o 00:04:44.607 LIB libspdk_event_ublk.a 00:04:44.607 LIB libspdk_event_nbd.a 00:04:44.607 SO libspdk_event_nbd.so.5.0 00:04:44.607 SO libspdk_event_ublk.so.2.0 00:04:44.865 LIB libspdk_event_scsi.a 00:04:44.865 SYMLINK libspdk_event_nbd.so 00:04:44.865 SYMLINK libspdk_event_ublk.so 00:04:44.865 SO libspdk_event_scsi.so.5.0 00:04:44.865 
SYMLINK libspdk_event_scsi.so 00:04:44.865 LIB libspdk_event_nvmf.a 00:04:44.865 SO libspdk_event_nvmf.so.5.0 00:04:44.865 SYMLINK libspdk_event_nvmf.so 00:04:44.865 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.122 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.122 LIB libspdk_event_vhost_scsi.a 00:04:45.122 LIB libspdk_event_iscsi.a 00:04:45.122 SO libspdk_event_vhost_scsi.so.2.0 00:04:45.122 SO libspdk_event_iscsi.so.5.0 00:04:45.380 SYMLINK libspdk_event_iscsi.so 00:04:45.380 SYMLINK libspdk_event_vhost_scsi.so 00:04:45.380 SO libspdk.so.5.0 00:04:45.380 SYMLINK libspdk.so 00:04:45.639 CC app/trace_record/trace_record.o 00:04:45.639 CXX app/trace/trace.o 00:04:45.639 CC app/spdk_nvme_perf/perf.o 00:04:45.639 CC app/spdk_lspci/spdk_lspci.o 00:04:45.639 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.639 CC app/nvmf_tgt/nvmf_main.o 00:04:45.639 CC app/spdk_tgt/spdk_tgt.o 00:04:45.639 CC examples/accel/perf/accel_perf.o 00:04:45.639 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.639 CC test/accel/dif/dif.o 00:04:45.897 LINK nvmf_tgt 00:04:45.897 LINK spdk_lspci 00:04:45.897 LINK iscsi_tgt 00:04:45.897 LINK spdk_trace_record 00:04:46.159 LINK spdk_tgt 00:04:46.159 LINK hello_bdev 00:04:46.159 LINK spdk_trace 00:04:46.159 CC app/spdk_nvme_identify/identify.o 00:04:46.416 CC app/spdk_nvme_discover/discovery_aer.o 00:04:46.416 LINK dif 00:04:46.416 CC app/spdk_top/spdk_top.o 00:04:46.416 LINK accel_perf 00:04:46.673 CC test/app/bdev_svc/bdev_svc.o 00:04:46.673 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.673 CC app/vhost/vhost.o 00:04:46.673 LINK spdk_nvme_discover 00:04:46.930 CC test/bdev/bdevio/bdevio.o 00:04:46.930 TEST_HEADER include/spdk/accel.h 00:04:46.930 TEST_HEADER include/spdk/accel_module.h 00:04:46.930 TEST_HEADER include/spdk/assert.h 00:04:46.930 TEST_HEADER include/spdk/barrier.h 00:04:46.930 TEST_HEADER include/spdk/base64.h 00:04:46.930 LINK bdev_svc 00:04:46.931 TEST_HEADER include/spdk/bdev.h 00:04:46.931 TEST_HEADER include/spdk/bdev_module.h 00:04:46.931 TEST_HEADER include/spdk/bdev_zone.h 00:04:46.931 TEST_HEADER include/spdk/bit_array.h 00:04:46.931 TEST_HEADER include/spdk/bit_pool.h 00:04:46.931 TEST_HEADER include/spdk/blob_bdev.h 00:04:46.931 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:46.931 TEST_HEADER include/spdk/blobfs.h 00:04:46.931 TEST_HEADER include/spdk/blob.h 00:04:46.931 TEST_HEADER include/spdk/conf.h 00:04:46.931 TEST_HEADER include/spdk/config.h 00:04:46.931 TEST_HEADER include/spdk/cpuset.h 00:04:46.931 TEST_HEADER include/spdk/crc16.h 00:04:46.931 LINK vhost 00:04:46.931 TEST_HEADER include/spdk/crc32.h 00:04:46.931 TEST_HEADER include/spdk/crc64.h 00:04:46.931 TEST_HEADER include/spdk/dif.h 00:04:46.931 TEST_HEADER include/spdk/dma.h 00:04:46.931 CC test/blobfs/mkfs/mkfs.o 00:04:46.931 TEST_HEADER include/spdk/endian.h 00:04:46.931 TEST_HEADER include/spdk/env_dpdk.h 00:04:46.931 TEST_HEADER include/spdk/env.h 00:04:46.931 TEST_HEADER include/spdk/event.h 00:04:46.931 TEST_HEADER include/spdk/fd_group.h 00:04:46.931 TEST_HEADER include/spdk/fd.h 00:04:46.931 TEST_HEADER include/spdk/file.h 00:04:46.931 TEST_HEADER include/spdk/ftl.h 00:04:46.931 TEST_HEADER include/spdk/gpt_spec.h 00:04:46.931 TEST_HEADER include/spdk/hexlify.h 00:04:46.931 TEST_HEADER include/spdk/histogram_data.h 00:04:46.931 TEST_HEADER include/spdk/idxd.h 00:04:46.931 TEST_HEADER include/spdk/idxd_spec.h 00:04:46.931 TEST_HEADER include/spdk/init.h 00:04:46.931 TEST_HEADER include/spdk/ioat.h 00:04:46.931 TEST_HEADER include/spdk/ioat_spec.h 00:04:46.931 LINK 
spdk_nvme_perf 00:04:46.931 TEST_HEADER include/spdk/iscsi_spec.h 00:04:46.931 TEST_HEADER include/spdk/json.h 00:04:46.931 TEST_HEADER include/spdk/jsonrpc.h 00:04:46.931 TEST_HEADER include/spdk/likely.h 00:04:46.931 TEST_HEADER include/spdk/log.h 00:04:46.931 TEST_HEADER include/spdk/lvol.h 00:04:46.931 TEST_HEADER include/spdk/memory.h 00:04:46.931 TEST_HEADER include/spdk/mmio.h 00:04:46.931 TEST_HEADER include/spdk/nbd.h 00:04:46.931 TEST_HEADER include/spdk/notify.h 00:04:46.931 TEST_HEADER include/spdk/nvme.h 00:04:46.931 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.931 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.931 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.931 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.931 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.931 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:46.931 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.931 TEST_HEADER include/spdk/nvmf.h 00:04:46.931 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.931 TEST_HEADER include/spdk/nvmf_transport.h 00:04:47.188 TEST_HEADER include/spdk/opal.h 00:04:47.188 TEST_HEADER include/spdk/opal_spec.h 00:04:47.188 TEST_HEADER include/spdk/pci_ids.h 00:04:47.188 TEST_HEADER include/spdk/pipe.h 00:04:47.188 TEST_HEADER include/spdk/queue.h 00:04:47.188 TEST_HEADER include/spdk/reduce.h 00:04:47.188 TEST_HEADER include/spdk/rpc.h 00:04:47.188 TEST_HEADER include/spdk/scheduler.h 00:04:47.188 TEST_HEADER include/spdk/scsi.h 00:04:47.188 TEST_HEADER include/spdk/scsi_spec.h 00:04:47.188 TEST_HEADER include/spdk/sock.h 00:04:47.188 TEST_HEADER include/spdk/stdinc.h 00:04:47.188 TEST_HEADER include/spdk/string.h 00:04:47.188 TEST_HEADER include/spdk/thread.h 00:04:47.188 TEST_HEADER include/spdk/trace.h 00:04:47.188 TEST_HEADER include/spdk/trace_parser.h 00:04:47.188 TEST_HEADER include/spdk/tree.h 00:04:47.188 TEST_HEADER include/spdk/ublk.h 00:04:47.188 TEST_HEADER include/spdk/util.h 00:04:47.188 TEST_HEADER include/spdk/uuid.h 00:04:47.188 TEST_HEADER include/spdk/version.h 00:04:47.188 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:47.188 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:47.188 TEST_HEADER include/spdk/vhost.h 00:04:47.188 TEST_HEADER include/spdk/vmd.h 00:04:47.188 TEST_HEADER include/spdk/xor.h 00:04:47.188 TEST_HEADER include/spdk/zipf.h 00:04:47.188 CXX test/cpp_headers/accel.o 00:04:47.188 CC test/dma/test_dma/test_dma.o 00:04:47.188 CXX test/cpp_headers/accel_module.o 00:04:47.188 CXX test/cpp_headers/assert.o 00:04:47.188 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:47.445 LINK mkfs 00:04:47.445 LINK bdevio 00:04:47.445 CC app/spdk_dd/spdk_dd.o 00:04:47.703 CXX test/cpp_headers/barrier.o 00:04:47.703 LINK test_dma 00:04:47.703 LINK spdk_top 00:04:47.703 LINK spdk_nvme_identify 00:04:47.703 CXX test/cpp_headers/base64.o 00:04:47.703 CC app/fio/nvme/fio_plugin.o 00:04:47.703 LINK nvme_fuzz 00:04:47.703 CXX test/cpp_headers/bdev.o 00:04:47.703 LINK bdevperf 00:04:47.960 CXX test/cpp_headers/bdev_module.o 00:04:47.960 CXX test/cpp_headers/bdev_zone.o 00:04:47.960 CXX test/cpp_headers/bit_array.o 00:04:47.960 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:47.960 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:47.960 LINK spdk_dd 00:04:47.960 CC app/fio/bdev/fio_plugin.o 00:04:47.960 CXX test/cpp_headers/bit_pool.o 00:04:48.215 CXX test/cpp_headers/blob_bdev.o 00:04:48.215 CXX test/cpp_headers/blobfs_bdev.o 00:04:48.215 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:48.215 CXX test/cpp_headers/blobfs.o 00:04:48.215 CC examples/blob/hello_world/hello_blob.o 
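The CXX test/cpp_headers/*.o lines that begin here build one small translation unit per public SPDK header, so a header that fails to pull in its own dependencies (or is not C++-clean) shows up as a single failing object. A rough shell sketch of the same idea, assuming the headers sit under include/spdk as in this repository; the temporary file name and compiler flags are illustrative only, not the actual test harness:

for hdr in include/spdk/*.h; do
    # One throwaway TU per header; a failure names the offending header directly.
    echo "#include \"spdk/$(basename "$hdr")\"" > /tmp/hdr_check.cpp
    g++ -std=c++11 -Iinclude -c /tmp/hdr_check.cpp -o /dev/null || echo "not self-contained: $hdr"
done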
00:04:48.472 CXX test/cpp_headers/blob.o 00:04:48.472 LINK spdk_nvme 00:04:48.472 CC examples/blob/cli/blobcli.o 00:04:48.472 CC test/env/vtophys/vtophys.o 00:04:48.472 CC test/env/mem_callbacks/mem_callbacks.o 00:04:48.472 CXX test/cpp_headers/conf.o 00:04:48.730 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:48.730 CC test/env/memory/memory_ut.o 00:04:48.730 LINK hello_blob 00:04:48.730 LINK vtophys 00:04:48.730 LINK mem_callbacks 00:04:48.988 CXX test/cpp_headers/config.o 00:04:48.988 LINK spdk_bdev 00:04:48.988 LINK env_dpdk_post_init 00:04:48.988 CXX test/cpp_headers/cpuset.o 00:04:48.988 LINK vhost_fuzz 00:04:48.988 CC test/app/histogram_perf/histogram_perf.o 00:04:48.988 LINK blobcli 00:04:48.988 CC test/app/jsoncat/jsoncat.o 00:04:49.246 CC test/app/stub/stub.o 00:04:49.246 CXX test/cpp_headers/crc16.o 00:04:49.246 CC test/env/pci/pci_ut.o 00:04:49.246 CC test/event/event_perf/event_perf.o 00:04:49.246 LINK histogram_perf 00:04:49.246 LINK jsoncat 00:04:49.505 CC test/event/reactor/reactor.o 00:04:49.505 LINK memory_ut 00:04:49.505 LINK stub 00:04:49.505 CXX test/cpp_headers/crc32.o 00:04:49.505 LINK event_perf 00:04:49.764 CC examples/ioat/perf/perf.o 00:04:49.764 LINK reactor 00:04:49.764 CC examples/ioat/verify/verify.o 00:04:49.764 CXX test/cpp_headers/crc64.o 00:04:49.764 LINK pci_ut 00:04:50.021 CC test/event/reactor_perf/reactor_perf.o 00:04:50.021 CC test/event/app_repeat/app_repeat.o 00:04:50.021 CC test/lvol/esnap/esnap.o 00:04:50.021 CXX test/cpp_headers/dif.o 00:04:50.021 LINK ioat_perf 00:04:50.021 CC test/event/scheduler/scheduler.o 00:04:50.021 CXX test/cpp_headers/dma.o 00:04:50.280 LINK verify 00:04:50.280 LINK reactor_perf 00:04:50.280 LINK app_repeat 00:04:50.280 CXX test/cpp_headers/endian.o 00:04:50.280 CXX test/cpp_headers/env_dpdk.o 00:04:50.280 CC test/rpc_client/rpc_client_test.o 00:04:50.539 CC test/nvme/aer/aer.o 00:04:50.539 LINK scheduler 00:04:50.539 CXX test/cpp_headers/env.o 00:04:50.539 CC examples/nvme/hello_world/hello_world.o 00:04:50.539 CC test/nvme/reset/reset.o 00:04:50.539 CC test/thread/poller_perf/poller_perf.o 00:04:50.539 CC examples/sock/hello_world/hello_sock.o 00:04:50.797 CXX test/cpp_headers/event.o 00:04:50.797 LINK rpc_client_test 00:04:50.797 LINK poller_perf 00:04:50.797 CXX test/cpp_headers/fd_group.o 00:04:50.797 LINK hello_world 00:04:50.797 LINK iscsi_fuzz 00:04:50.797 LINK aer 00:04:50.797 LINK hello_sock 00:04:51.055 CXX test/cpp_headers/fd.o 00:04:51.055 LINK reset 00:04:51.055 CC examples/vmd/lsvmd/lsvmd.o 00:04:51.055 CC examples/nvme/reconnect/reconnect.o 00:04:51.055 CC examples/nvmf/nvmf/nvmf.o 00:04:51.055 CXX test/cpp_headers/file.o 00:04:51.314 CC examples/util/zipf/zipf.o 00:04:51.314 CC examples/thread/thread/thread_ex.o 00:04:51.314 LINK lsvmd 00:04:51.314 CC examples/vmd/led/led.o 00:04:51.314 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.314 CC test/nvme/sgl/sgl.o 00:04:51.572 CXX test/cpp_headers/ftl.o 00:04:51.572 LINK zipf 00:04:51.572 CXX test/cpp_headers/gpt_spec.o 00:04:51.572 LINK led 00:04:51.572 LINK thread 00:04:51.572 CXX test/cpp_headers/hexlify.o 00:04:51.832 LINK nvmf 00:04:51.832 LINK reconnect 00:04:51.832 CC examples/nvme/arbitration/arbitration.o 00:04:51.832 LINK sgl 00:04:51.832 CC examples/nvme/hotplug/hotplug.o 00:04:51.832 CXX test/cpp_headers/histogram_data.o 00:04:52.090 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.090 CC examples/nvme/abort/abort.o 00:04:52.090 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.090 CC examples/idxd/perf/perf.o 00:04:52.090 
CC test/nvme/e2edp/nvme_dp.o 00:04:52.090 LINK nvme_manage 00:04:52.349 CXX test/cpp_headers/idxd.o 00:04:52.349 LINK arbitration 00:04:52.349 LINK cmb_copy 00:04:52.349 LINK hotplug 00:04:52.349 CXX test/cpp_headers/idxd_spec.o 00:04:52.349 LINK pmr_persistence 00:04:52.607 CXX test/cpp_headers/init.o 00:04:52.607 CXX test/cpp_headers/ioat.o 00:04:52.607 CXX test/cpp_headers/ioat_spec.o 00:04:52.607 LINK nvme_dp 00:04:52.607 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:52.865 LINK abort 00:04:52.865 LINK idxd_perf 00:04:52.865 CC test/nvme/overhead/overhead.o 00:04:52.865 CC test/nvme/err_injection/err_injection.o 00:04:52.865 CXX test/cpp_headers/iscsi_spec.o 00:04:52.865 CC test/nvme/startup/startup.o 00:04:52.865 CC test/nvme/reserve/reserve.o 00:04:53.124 CC test/nvme/simple_copy/simple_copy.o 00:04:53.124 LINK interrupt_tgt 00:04:53.124 CXX test/cpp_headers/json.o 00:04:53.124 CC test/nvme/connect_stress/connect_stress.o 00:04:53.124 LINK reserve 00:04:53.124 LINK err_injection 00:04:53.124 LINK startup 00:04:53.382 LINK simple_copy 00:04:53.382 LINK overhead 00:04:53.382 LINK connect_stress 00:04:53.382 CXX test/cpp_headers/jsonrpc.o 00:04:53.382 CC test/nvme/compliance/nvme_compliance.o 00:04:53.382 CC test/nvme/boot_partition/boot_partition.o 00:04:53.641 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.641 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:53.641 CXX test/cpp_headers/likely.o 00:04:53.641 CXX test/cpp_headers/log.o 00:04:53.641 CC test/nvme/fdp/fdp.o 00:04:53.641 LINK boot_partition 00:04:53.641 CC test/nvme/cuse/cuse.o 00:04:53.900 CXX test/cpp_headers/lvol.o 00:04:53.900 CXX test/cpp_headers/memory.o 00:04:53.900 LINK fused_ordering 00:04:53.900 CXX test/cpp_headers/mmio.o 00:04:53.900 LINK doorbell_aers 00:04:54.158 LINK fdp 00:04:54.158 CXX test/cpp_headers/nbd.o 00:04:54.158 CXX test/cpp_headers/notify.o 00:04:54.158 LINK nvme_compliance 00:04:54.158 CXX test/cpp_headers/nvme.o 00:04:54.158 CXX test/cpp_headers/nvme_intel.o 00:04:54.417 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.417 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.417 CXX test/cpp_headers/nvme_spec.o 00:04:54.417 CXX test/cpp_headers/nvme_zns.o 00:04:54.417 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.417 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.417 CXX test/cpp_headers/nvmf.o 00:04:54.417 CXX test/cpp_headers/nvmf_spec.o 00:04:54.417 CXX test/cpp_headers/nvmf_transport.o 00:04:54.675 CXX test/cpp_headers/opal.o 00:04:54.675 CXX test/cpp_headers/opal_spec.o 00:04:54.675 CXX test/cpp_headers/pci_ids.o 00:04:54.675 CXX test/cpp_headers/pipe.o 00:04:54.675 CXX test/cpp_headers/queue.o 00:04:54.675 CXX test/cpp_headers/reduce.o 00:04:54.675 CXX test/cpp_headers/rpc.o 00:04:54.675 CXX test/cpp_headers/scheduler.o 00:04:54.675 CXX test/cpp_headers/scsi.o 00:04:54.675 CXX test/cpp_headers/scsi_spec.o 00:04:54.933 CXX test/cpp_headers/sock.o 00:04:54.933 CXX test/cpp_headers/stdinc.o 00:04:54.933 CXX test/cpp_headers/string.o 00:04:54.933 CXX test/cpp_headers/thread.o 00:04:54.933 CXX test/cpp_headers/trace.o 00:04:55.191 CXX test/cpp_headers/trace_parser.o 00:04:55.191 CXX test/cpp_headers/tree.o 00:04:55.192 CXX test/cpp_headers/ublk.o 00:04:55.192 CXX test/cpp_headers/util.o 00:04:55.192 LINK cuse 00:04:55.192 CXX test/cpp_headers/uuid.o 00:04:55.192 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.192 CXX test/cpp_headers/version.o 00:04:55.192 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.192 CXX test/cpp_headers/vhost.o 00:04:55.192 CXX test/cpp_headers/vmd.o 00:04:55.450 CXX 
test/cpp_headers/xor.o 00:04:55.450 CXX test/cpp_headers/zipf.o 00:04:56.826 LINK esnap 00:05:02.093 00:05:02.093 real 1m20.644s 00:05:02.093 user 8m7.692s 00:05:02.093 sys 1m23.541s 00:05:02.093 10:05:20 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:02.093 10:05:20 -- common/autotest_common.sh@10 -- $ set +x 00:05:02.093 ************************************ 00:05:02.093 END TEST make 00:05:02.093 ************************************ 00:05:02.093 10:05:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:02.093 10:05:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:02.093 10:05:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:02.093 10:05:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:02.093 10:05:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:02.093 10:05:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:02.093 10:05:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:02.093 10:05:20 -- scripts/common.sh@335 -- # IFS=.-: 00:05:02.093 10:05:20 -- scripts/common.sh@335 -- # read -ra ver1 00:05:02.093 10:05:20 -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.093 10:05:20 -- scripts/common.sh@336 -- # read -ra ver2 00:05:02.093 10:05:20 -- scripts/common.sh@337 -- # local 'op=<' 00:05:02.093 10:05:20 -- scripts/common.sh@339 -- # ver1_l=2 00:05:02.093 10:05:20 -- scripts/common.sh@340 -- # ver2_l=1 00:05:02.093 10:05:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:02.093 10:05:20 -- scripts/common.sh@343 -- # case "$op" in 00:05:02.093 10:05:20 -- scripts/common.sh@344 -- # : 1 00:05:02.093 10:05:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:02.093 10:05:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.093 10:05:20 -- scripts/common.sh@364 -- # decimal 1 00:05:02.093 10:05:20 -- scripts/common.sh@352 -- # local d=1 00:05:02.093 10:05:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.093 10:05:20 -- scripts/common.sh@354 -- # echo 1 00:05:02.093 10:05:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:02.093 10:05:20 -- scripts/common.sh@365 -- # decimal 2 00:05:02.093 10:05:20 -- scripts/common.sh@352 -- # local d=2 00:05:02.093 10:05:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.093 10:05:20 -- scripts/common.sh@354 -- # echo 2 00:05:02.093 10:05:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:02.093 10:05:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:02.093 10:05:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:02.093 10:05:20 -- scripts/common.sh@367 -- # return 0 00:05:02.093 10:05:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.093 10:05:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:02.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.093 --rc genhtml_branch_coverage=1 00:05:02.093 --rc genhtml_function_coverage=1 00:05:02.093 --rc genhtml_legend=1 00:05:02.093 --rc geninfo_all_blocks=1 00:05:02.093 --rc geninfo_unexecuted_blocks=1 00:05:02.093 00:05:02.093 ' 00:05:02.093 10:05:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:02.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.093 --rc genhtml_branch_coverage=1 00:05:02.093 --rc genhtml_function_coverage=1 00:05:02.093 --rc genhtml_legend=1 00:05:02.093 --rc geninfo_all_blocks=1 00:05:02.093 --rc geninfo_unexecuted_blocks=1 00:05:02.093 00:05:02.093 ' 00:05:02.093 10:05:20 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:02.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.093 --rc genhtml_branch_coverage=1 00:05:02.093 --rc genhtml_function_coverage=1 00:05:02.093 --rc genhtml_legend=1 00:05:02.093 --rc geninfo_all_blocks=1 00:05:02.093 --rc geninfo_unexecuted_blocks=1 00:05:02.093 00:05:02.093 ' 00:05:02.094 10:05:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:02.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.094 --rc genhtml_branch_coverage=1 00:05:02.094 --rc genhtml_function_coverage=1 00:05:02.094 --rc genhtml_legend=1 00:05:02.094 --rc geninfo_all_blocks=1 00:05:02.094 --rc geninfo_unexecuted_blocks=1 00:05:02.094 00:05:02.094 ' 00:05:02.094 10:05:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.094 10:05:20 -- nvmf/common.sh@7 -- # uname -s 00:05:02.094 10:05:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.094 10:05:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.094 10:05:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.094 10:05:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.094 10:05:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.094 10:05:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.094 10:05:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.094 10:05:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.094 10:05:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.094 10:05:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.094 10:05:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:05:02.094 10:05:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:05:02.094 10:05:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.094 10:05:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.094 10:05:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:02.094 10:05:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.094 10:05:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.094 10:05:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.094 10:05:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.094 10:05:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.094 10:05:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.094 10:05:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.094 10:05:20 -- paths/export.sh@5 -- # export PATH 00:05:02.094 10:05:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.094 10:05:20 -- nvmf/common.sh@46 -- # : 0 00:05:02.094 10:05:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:02.094 10:05:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:02.094 10:05:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:02.094 10:05:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.094 10:05:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.094 10:05:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:02.094 10:05:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:02.094 10:05:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:02.094 10:05:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:02.094 10:05:20 -- spdk/autotest.sh@32 -- # uname -s 00:05:02.094 10:05:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:02.094 10:05:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:02.094 10:05:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.094 10:05:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:02.094 10:05:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:02.094 10:05:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:02.094 10:05:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:02.094 10:05:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:02.094 10:05:20 -- spdk/autotest.sh@48 -- # udevadm_pid=61521 00:05:02.094 10:05:20 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.094 10:05:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:02.094 10:05:20 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.094 10:05:20 -- spdk/autotest.sh@54 -- # echo 61523 00:05:02.094 10:05:20 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:02.094 10:05:20 -- spdk/autotest.sh@56 -- # echo 61526 00:05:02.094 10:05:20 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:05:02.094 10:05:20 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.094 10:05:20 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:02.094 10:05:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.094 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.094 10:05:20 -- spdk/autotest.sh@70 -- # create_test_list 00:05:02.094 10:05:20 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:02.094 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:02.094 10:05:20 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.094 10:05:20 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.094 10:05:20 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.094 10:05:20 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.094 10:05:20 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.094 10:05:20 -- spdk/autotest.sh@76 -- # 
freebsd_update_contigmem_mod 00:05:02.094 10:05:20 -- common/autotest_common.sh@1450 -- # uname 00:05:02.094 10:05:20 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:05:02.094 10:05:20 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:02.094 10:05:20 -- common/autotest_common.sh@1470 -- # uname 00:05:02.094 10:05:20 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:05:02.094 10:05:20 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:05:02.094 10:05:20 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.094 lcov: LCOV version 1.15 00:05:02.094 10:05:21 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:12.097 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:12.097 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:12.097 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:12.097 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:12.097 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:12.097 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:38.650 10:05:55 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:38.650 10:05:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.650 10:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:38.650 10:05:55 -- spdk/autotest.sh@89 -- # rm -f 00:05:38.650 10:05:55 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.650 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:38.650 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:38.650 10:05:55 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:38.650 10:05:55 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:38.650 10:05:55 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:38.650 10:05:55 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:38.650 10:05:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.650 10:05:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:38.650 10:05:55 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:38.650 10:05:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.650 10:05:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:38.650 10:05:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:38.650 10:05:55 -- common/autotest_common.sh@1659 -- # [[ -e 
/sys/block/nvme1n1/queue/zoned ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.650 10:05:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:38.650 10:05:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:38.650 10:05:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.650 10:05:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:38.650 10:05:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:38.650 10:05:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:38.650 10:05:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.650 10:05:55 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # grep -v p 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:38.650 10:05:55 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:38.650 10:05:55 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:38.650 10:05:55 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:38.650 10:05:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.650 No valid GPT data, bailing 00:05:38.650 10:05:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.650 10:05:55 -- scripts/common.sh@393 -- # pt= 00:05:38.650 10:05:55 -- scripts/common.sh@394 -- # return 1 00:05:38.650 10:05:55 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.650 1+0 records in 00:05:38.650 1+0 records out 00:05:38.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498998 s, 210 MB/s 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:38.650 10:05:55 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:38.650 10:05:55 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:38.650 10:05:55 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:38.650 10:05:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:38.650 No valid GPT data, bailing 00:05:38.650 10:05:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:38.650 10:05:55 -- scripts/common.sh@393 -- # pt= 00:05:38.650 10:05:55 -- scripts/common.sh@394 -- # return 1 00:05:38.650 10:05:55 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:38.650 1+0 records in 00:05:38.650 1+0 records out 00:05:38.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445255 s, 236 MB/s 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:38.650 10:05:55 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:38.650 10:05:55 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:38.650 10:05:55 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:38.650 10:05:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:38.650 No valid GPT data, bailing 00:05:38.650 10:05:55 -- 
scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:38.650 10:05:55 -- scripts/common.sh@393 -- # pt= 00:05:38.650 10:05:55 -- scripts/common.sh@394 -- # return 1 00:05:38.650 10:05:55 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:38.650 1+0 records in 00:05:38.650 1+0 records out 00:05:38.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00478699 s, 219 MB/s 00:05:38.650 10:05:55 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:38.650 10:05:55 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:38.650 10:05:55 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:38.650 10:05:55 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:38.650 10:05:55 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:38.650 No valid GPT data, bailing 00:05:38.650 10:05:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:38.650 10:05:56 -- scripts/common.sh@393 -- # pt= 00:05:38.650 10:05:56 -- scripts/common.sh@394 -- # return 1 00:05:38.650 10:05:56 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:38.650 1+0 records in 00:05:38.650 1+0 records out 00:05:38.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491167 s, 213 MB/s 00:05:38.650 10:05:56 -- spdk/autotest.sh@116 -- # sync 00:05:38.650 10:05:56 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:38.650 10:05:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:38.650 10:05:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:38.650 10:05:57 -- spdk/autotest.sh@122 -- # uname -s 00:05:38.650 10:05:57 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:38.650 10:05:57 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:38.650 10:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.650 10:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.650 10:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.650 ************************************ 00:05:38.650 START TEST setup.sh 00:05:38.650 ************************************ 00:05:38.650 10:05:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:38.650 * Looking for test storage... 
00:05:38.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:38.650 10:05:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.650 10:05:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.650 10:05:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.650 10:05:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.650 10:05:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.650 10:05:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.650 10:05:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.650 10:05:57 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.650 10:05:57 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.650 10:05:57 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.650 10:05:57 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.650 10:05:57 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.650 10:05:57 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.650 10:05:57 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.650 10:05:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.650 10:05:57 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.650 10:05:57 -- scripts/common.sh@344 -- # : 1 00:05:38.650 10:05:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.650 10:05:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.650 10:05:57 -- scripts/common.sh@364 -- # decimal 1 00:05:38.650 10:05:57 -- scripts/common.sh@352 -- # local d=1 00:05:38.650 10:05:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.650 10:05:57 -- scripts/common.sh@354 -- # echo 1 00:05:38.650 10:05:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.650 10:05:57 -- scripts/common.sh@365 -- # decimal 2 00:05:38.650 10:05:57 -- scripts/common.sh@352 -- # local d=2 00:05:38.650 10:05:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.650 10:05:57 -- scripts/common.sh@354 -- # echo 2 00:05:38.650 10:05:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.650 10:05:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.650 10:05:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.650 10:05:57 -- scripts/common.sh@367 -- # return 0 00:05:38.650 10:05:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.650 10:05:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.650 --rc genhtml_branch_coverage=1 00:05:38.650 --rc genhtml_function_coverage=1 00:05:38.650 --rc genhtml_legend=1 00:05:38.650 --rc geninfo_all_blocks=1 00:05:38.650 --rc geninfo_unexecuted_blocks=1 00:05:38.650 00:05:38.650 ' 00:05:38.650 10:05:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.650 --rc genhtml_branch_coverage=1 00:05:38.650 --rc genhtml_function_coverage=1 00:05:38.650 --rc genhtml_legend=1 00:05:38.650 --rc geninfo_all_blocks=1 00:05:38.650 --rc geninfo_unexecuted_blocks=1 00:05:38.650 00:05:38.650 ' 00:05:38.650 10:05:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:57 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:57 -- setup/test-setup.sh@10 -- # uname -s 00:05:38.651 10:05:57 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:38.651 10:05:57 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:38.651 10:05:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.651 10:05:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.651 10:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.651 ************************************ 00:05:38.651 START TEST acl 00:05:38.651 ************************************ 00:05:38.651 10:05:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:38.651 * Looking for test storage... 00:05:38.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:38.651 10:05:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.651 10:05:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:38.651 10:05:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:38.651 10:05:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:38.651 10:05:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:38.651 10:05:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:38.651 10:05:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:38.651 10:05:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:38.651 10:05:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.651 10:05:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:38.651 10:05:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:38.651 10:05:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:38.651 10:05:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:38.651 10:05:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:38.651 10:05:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:38.651 10:05:58 -- scripts/common.sh@344 -- # : 1 00:05:38.651 10:05:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:38.651 10:05:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.651 10:05:58 -- scripts/common.sh@364 -- # decimal 1 00:05:38.651 10:05:58 -- scripts/common.sh@352 -- # local d=1 00:05:38.651 10:05:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.651 10:05:58 -- scripts/common.sh@354 -- # echo 1 00:05:38.651 10:05:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:38.651 10:05:58 -- scripts/common.sh@365 -- # decimal 2 00:05:38.651 10:05:58 -- scripts/common.sh@352 -- # local d=2 00:05:38.651 10:05:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.651 10:05:58 -- scripts/common.sh@354 -- # echo 2 00:05:38.651 10:05:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:38.651 10:05:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:38.651 10:05:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:38.651 10:05:58 -- scripts/common.sh@367 -- # return 0 00:05:38.651 10:05:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.651 10:05:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:38.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.651 --rc genhtml_branch_coverage=1 00:05:38.651 --rc genhtml_function_coverage=1 00:05:38.651 --rc genhtml_legend=1 00:05:38.651 --rc geninfo_all_blocks=1 00:05:38.651 --rc geninfo_unexecuted_blocks=1 00:05:38.651 00:05:38.651 ' 00:05:38.651 10:05:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:38.651 10:05:58 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:38.651 10:05:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:38.651 10:05:58 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:38.651 10:05:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.651 10:05:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:38.651 10:05:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:38.651 10:05:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.651 10:05:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:38.651 10:05:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:38.651 10:05:58 -- 
common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.651 10:05:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:38.651 10:05:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:38.651 10:05:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:38.651 10:05:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:38.651 10:05:58 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:38.651 10:05:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:38.651 10:05:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:38.651 10:05:58 -- setup/acl.sh@12 -- # devs=() 00:05:38.651 10:05:58 -- setup/acl.sh@12 -- # declare -a devs 00:05:38.651 10:05:58 -- setup/acl.sh@13 -- # drivers=() 00:05:38.651 10:05:58 -- setup/acl.sh@13 -- # declare -A drivers 00:05:38.651 10:05:58 -- setup/acl.sh@51 -- # setup reset 00:05:38.651 10:05:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.651 10:05:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.586 10:05:58 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:39.586 10:05:58 -- setup/acl.sh@16 -- # local dev driver 00:05:39.586 10:05:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.586 10:05:58 -- setup/acl.sh@15 -- # setup output status 00:05:39.586 10:05:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.586 10:05:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:39.586 Hugepages 00:05:39.586 node hugesize free / total 00:05:39.586 10:05:58 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:39.586 10:05:58 -- setup/acl.sh@19 -- # continue 00:05:39.586 10:05:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.586 00:05:39.586 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:39.586 10:05:58 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:39.586 10:05:58 -- setup/acl.sh@19 -- # continue 00:05:39.586 10:05:58 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.586 10:05:59 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:39.586 10:05:59 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:39.586 10:05:59 -- setup/acl.sh@20 -- # continue 00:05:39.586 10:05:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.586 10:05:59 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:39.586 10:05:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:39.586 10:05:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:39.586 10:05:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:39.586 10:05:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:39.586 10:05:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.912 10:05:59 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:39.912 10:05:59 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:39.912 10:05:59 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:39.912 10:05:59 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:39.912 10:05:59 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
00:05:39.912 10:05:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:39.912 10:05:59 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:39.912 10:05:59 -- setup/acl.sh@54 -- # run_test denied denied 00:05:39.912 10:05:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.912 10:05:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.912 10:05:59 -- common/autotest_common.sh@10 -- # set +x 00:05:39.912 ************************************ 00:05:39.912 START TEST denied 00:05:39.912 ************************************ 00:05:39.912 10:05:59 -- common/autotest_common.sh@1114 -- # denied 00:05:39.912 10:05:59 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:39.912 10:05:59 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:39.912 10:05:59 -- setup/acl.sh@38 -- # setup output config 00:05:39.912 10:05:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.912 10:05:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.477 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:40.478 10:05:59 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:40.478 10:05:59 -- setup/acl.sh@28 -- # local dev driver 00:05:40.478 10:05:59 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:40.478 10:05:59 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:40.478 10:05:59 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:40.478 10:05:59 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:40.478 10:05:59 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:40.478 10:05:59 -- setup/acl.sh@41 -- # setup reset 00:05:40.478 10:05:59 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.478 10:05:59 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.045 00:05:41.045 real 0m1.343s 00:05:41.045 user 0m0.557s 00:05:41.045 sys 0m0.746s 00:05:41.045 10:06:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.045 10:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.045 ************************************ 00:05:41.045 END TEST denied 00:05:41.045 ************************************ 00:05:41.045 10:06:00 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:41.045 10:06:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.045 10:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.045 10:06:00 -- common/autotest_common.sh@10 -- # set +x 00:05:41.045 ************************************ 00:05:41.045 START TEST allowed 00:05:41.045 ************************************ 00:05:41.045 10:06:00 -- common/autotest_common.sh@1114 -- # allowed 00:05:41.045 10:06:00 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:41.045 10:06:00 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:41.045 10:06:00 -- setup/acl.sh@45 -- # setup output config 00:05:41.045 10:06:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.045 10:06:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:41.980 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:41.980 10:06:01 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:41.980 10:06:01 -- setup/acl.sh@28 -- # local dev driver 00:05:41.980 10:06:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:41.980 10:06:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:41.980 10:06:01 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:05:41.980 10:06:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:41.980 10:06:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:41.980 10:06:01 -- setup/acl.sh@48 -- # setup reset 00:05:41.980 10:06:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:41.980 10:06:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.546 00:05:42.546 real 0m1.457s 00:05:42.546 user 0m0.672s 00:05:42.546 sys 0m0.783s 00:05:42.546 10:06:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.546 ************************************ 00:05:42.546 END TEST allowed 00:05:42.546 ************************************ 00:05:42.546 10:06:01 -- common/autotest_common.sh@10 -- # set +x 00:05:42.546 00:05:42.546 real 0m4.062s 00:05:42.546 user 0m1.838s 00:05:42.546 sys 0m2.211s 00:05:42.546 10:06:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.546 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:42.546 ************************************ 00:05:42.546 END TEST acl 00:05:42.546 ************************************ 00:05:42.546 10:06:02 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:42.546 10:06:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.546 10:06:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.546 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:42.546 ************************************ 00:05:42.546 START TEST hugepages 00:05:42.546 ************************************ 00:05:42.546 10:06:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:42.805 * Looking for test storage... 00:05:42.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:42.805 10:06:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:42.805 10:06:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:42.805 10:06:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:42.805 10:06:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:42.805 10:06:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:42.805 10:06:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:42.805 10:06:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:42.805 10:06:02 -- scripts/common.sh@335 -- # IFS=.-: 00:05:42.805 10:06:02 -- scripts/common.sh@335 -- # read -ra ver1 00:05:42.805 10:06:02 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.805 10:06:02 -- scripts/common.sh@336 -- # read -ra ver2 00:05:42.805 10:06:02 -- scripts/common.sh@337 -- # local 'op=<' 00:05:42.805 10:06:02 -- scripts/common.sh@339 -- # ver1_l=2 00:05:42.805 10:06:02 -- scripts/common.sh@340 -- # ver2_l=1 00:05:42.805 10:06:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:42.805 10:06:02 -- scripts/common.sh@343 -- # case "$op" in 00:05:42.805 10:06:02 -- scripts/common.sh@344 -- # : 1 00:05:42.805 10:06:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:42.805 10:06:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.805 10:06:02 -- scripts/common.sh@364 -- # decimal 1 00:05:42.805 10:06:02 -- scripts/common.sh@352 -- # local d=1 00:05:42.805 10:06:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.805 10:06:02 -- scripts/common.sh@354 -- # echo 1 00:05:42.805 10:06:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:42.805 10:06:02 -- scripts/common.sh@365 -- # decimal 2 00:05:42.805 10:06:02 -- scripts/common.sh@352 -- # local d=2 00:05:42.805 10:06:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.805 10:06:02 -- scripts/common.sh@354 -- # echo 2 00:05:42.805 10:06:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:42.805 10:06:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:42.805 10:06:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:42.805 10:06:02 -- scripts/common.sh@367 -- # return 0 00:05:42.805 10:06:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.805 10:06:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.805 --rc genhtml_function_coverage=1 00:05:42.805 --rc genhtml_legend=1 00:05:42.805 --rc geninfo_all_blocks=1 00:05:42.805 --rc geninfo_unexecuted_blocks=1 00:05:42.805 00:05:42.805 ' 00:05:42.805 10:06:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.805 --rc genhtml_function_coverage=1 00:05:42.805 --rc genhtml_legend=1 00:05:42.805 --rc geninfo_all_blocks=1 00:05:42.805 --rc geninfo_unexecuted_blocks=1 00:05:42.805 00:05:42.805 ' 00:05:42.805 10:06:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.805 --rc genhtml_function_coverage=1 00:05:42.805 --rc genhtml_legend=1 00:05:42.805 --rc geninfo_all_blocks=1 00:05:42.805 --rc geninfo_unexecuted_blocks=1 00:05:42.805 00:05:42.805 ' 00:05:42.805 10:06:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:42.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.805 --rc genhtml_branch_coverage=1 00:05:42.805 --rc genhtml_function_coverage=1 00:05:42.805 --rc genhtml_legend=1 00:05:42.806 --rc geninfo_all_blocks=1 00:05:42.806 --rc geninfo_unexecuted_blocks=1 00:05:42.806 00:05:42.806 ' 00:05:42.806 10:06:02 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:42.806 10:06:02 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:42.806 10:06:02 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:42.806 10:06:02 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:42.806 10:06:02 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:42.806 10:06:02 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:42.806 10:06:02 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:42.806 10:06:02 -- setup/common.sh@18 -- # local node= 00:05:42.806 10:06:02 -- setup/common.sh@19 -- # local var val 00:05:42.806 10:06:02 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.806 10:06:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.806 10:06:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.806 10:06:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.806 10:06:02 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.806 
10:06:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 4705868 kB' 'MemAvailable: 7340048 kB' 'Buffers: 2684 kB' 'Cached: 2836096 kB' 'SwapCached: 0 kB' 'Active: 494312 kB' 'Inactive: 2459116 kB' 'Active(anon): 125160 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459116 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 116284 kB' 'Mapped: 50992 kB' 'Shmem: 10512 kB' 'KReclaimable: 86128 kB' 'Slab: 189740 kB' 'SReclaimable: 86128 kB' 'SUnreclaim: 103612 kB' 'KernelStack: 6720 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411004 kB' 'Committed_AS: 317936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- 
setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.806 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.806 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 
10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # continue 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.807 10:06:02 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.807 10:06:02 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:42.807 10:06:02 -- setup/common.sh@33 -- # echo 2048 00:05:42.807 10:06:02 -- setup/common.sh@33 -- # return 0 00:05:42.807 10:06:02 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:42.807 10:06:02 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:42.807 10:06:02 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:42.807 10:06:02 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:42.807 10:06:02 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:42.807 10:06:02 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:42.807 10:06:02 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:42.807 10:06:02 -- setup/hugepages.sh@207 -- # get_nodes 00:05:42.807 10:06:02 -- setup/hugepages.sh@27 -- # local node 00:05:42.807 10:06:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.807 10:06:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:42.807 10:06:02 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.807 10:06:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.807 10:06:02 -- setup/hugepages.sh@208 -- # clear_hp 00:05:42.807 10:06:02 -- setup/hugepages.sh@37 -- # local node hp 00:05:42.807 10:06:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:42.807 10:06:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:42.807 10:06:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:42.807 10:06:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:42.807 10:06:02 -- setup/hugepages.sh@41 -- # echo 0 00:05:42.807 
10:06:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:42.807 10:06:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:42.807 10:06:02 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:42.807 10:06:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.807 10:06:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.807 10:06:02 -- common/autotest_common.sh@10 -- # set +x 00:05:42.807 ************************************ 00:05:42.807 START TEST default_setup 00:05:42.807 ************************************ 00:05:42.807 10:06:02 -- common/autotest_common.sh@1114 -- # default_setup 00:05:42.807 10:06:02 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:42.807 10:06:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:42.807 10:06:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:42.807 10:06:02 -- setup/hugepages.sh@51 -- # shift 00:05:42.807 10:06:02 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:42.807 10:06:02 -- setup/hugepages.sh@52 -- # local node_ids 00:05:42.807 10:06:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.807 10:06:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:42.807 10:06:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:42.807 10:06:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:42.807 10:06:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.807 10:06:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:42.807 10:06:02 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.807 10:06:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.807 10:06:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.807 10:06:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:42.807 10:06:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:42.807 10:06:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:42.807 10:06:02 -- setup/hugepages.sh@73 -- # return 0 00:05:42.807 10:06:02 -- setup/hugepages.sh@137 -- # setup output 00:05:42.807 10:06:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.807 10:06:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.374 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.633 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.633 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:43.633 10:06:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:43.633 10:06:03 -- setup/hugepages.sh@89 -- # local node 00:05:43.633 10:06:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.633 10:06:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.633 10:06:03 -- setup/hugepages.sh@92 -- # local surp 00:05:43.633 10:06:03 -- setup/hugepages.sh@93 -- # local resv 00:05:43.633 10:06:03 -- setup/hugepages.sh@94 -- # local anon 00:05:43.633 10:06:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.633 10:06:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.633 10:06:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.633 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:43.633 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:43.633 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.633 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.633 10:06:03 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:43.633 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.633 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.633 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819604 kB' 'MemAvailable: 9453640 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 498200 kB' 'Inactive: 2459120 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120196 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189520 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103692 kB' 'KernelStack: 6736 kB' 'PageTables: 4572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.633 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.633 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- 
setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.634 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.634 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:43.634 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:43.634 10:06:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:43.634 10:06:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.634 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.634 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:43.634 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:43.634 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.634 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.634 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.634 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.634 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.634 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.634 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819604 kB' 'MemAvailable: 9453640 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 497972 kB' 'Inactive: 2459120 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 
'Inactive(file): 2459120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119992 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189516 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103688 kB' 'KernelStack: 6720 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.635 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.635 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # 
continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.896 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.896 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.897 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:43.897 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:43.897 10:06:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:43.897 10:06:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.897 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.897 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:43.897 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:43.897 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.897 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.897 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.897 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.897 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.897 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819604 kB' 'MemAvailable: 9453640 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 497888 kB' 'Inactive: 2459120 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189520 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103692 kB' 'KernelStack: 6672 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.897 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.897 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.898 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.898 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.899 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:43.899 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:43.899 10:06:03 -- setup/hugepages.sh@100 -- # resv=0 00:05:43.899 nr_hugepages=1024 00:05:43.899 10:06:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.899 resv_hugepages=0 00:05:43.899 10:06:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.899 surplus_hugepages=0 00:05:43.899 10:06:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.899 anon_hugepages=0 00:05:43.899 10:06:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.899 10:06:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.899 10:06:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.899 10:06:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.899 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.899 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:43.899 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:43.899 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.899 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.899 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.899 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.899 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.899 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819604 kB' 'MemAvailable: 9453640 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 497904 kB' 'Inactive: 2459120 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189492 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103664 kB' 'KernelStack: 6688 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.899 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 
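The repeated read loops above are verify_nr_hugepages calling get_meminfo once per key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total): each pass splits the /proc/meminfo lines on ': ', keeps the value for the requested key, and the surrounding checks confirm that the 1024 pages requested by default_setup (2097152 kB at 2048 kB per page) equal HugePages_Total with no surplus or reserved pages. A rough stand-alone equivalent of that scan, assuming plain /proc/meminfo rather than a per-node meminfo file (get_meminfo_value is our name for it, not the script's):

#!/usr/bin/env bash
# Sketch of the per-key scan performed in the trace: split each
# /proc/meminfo line on ': ' and print the value for the requested key.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
rsvd=$(get_meminfo_value HugePages_Rsvd)
# default_setup asked for 2097152 kB of 2048 kB pages, i.e. 1024 pages,
# so the check in the trace amounts to: total == 1024, surp == 0, rsvd == 0.
echo "HugePages_Total=$total HugePages_Surp=$surp HugePages_Rsvd=$rsvd"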
00:05:43.899 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.899 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 
00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.900 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.900 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.901 10:06:03 -- setup/common.sh@33 -- # echo 1024 00:05:43.901 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:43.901 10:06:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.901 10:06:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.901 10:06:03 -- setup/hugepages.sh@27 -- # local node 00:05:43.901 10:06:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.901 10:06:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:43.901 10:06:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.901 10:06:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.901 10:06:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.901 10:06:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.901 10:06:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.901 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.901 10:06:03 -- setup/common.sh@18 -- # local node=0 00:05:43.901 10:06:03 -- 
setup/common.sh@19 -- # local var val 00:05:43.901 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.901 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.901 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.901 10:06:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.901 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.901 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819604 kB' 'MemUsed: 5419500 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 2459120 kB' 'Active(anon): 128724 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459120 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2838772 kB' 'Mapped: 50792 kB' 'AnonPages: 119892 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85828 kB' 'Slab: 189492 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.901 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.901 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- 
# continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # continue 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.902 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.902 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.902 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:43.902 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:43.902 10:06:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.903 10:06:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.903 10:06:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.903 node0=1024 expecting 1024 00:05:43.903 10:06:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.903 10:06:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:43.903 10:06:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:43.903 00:05:43.903 real 0m0.974s 00:05:43.903 user 0m0.462s 00:05:43.903 sys 0m0.457s 00:05:43.903 10:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.903 ************************************ 00:05:43.903 END TEST default_setup 00:05:43.903 ************************************ 00:05:43.903 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 10:06:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:43.903 10:06:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.903 10:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.903 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:43.903 ************************************ 00:05:43.903 START TEST per_node_1G_alloc 00:05:43.903 ************************************ 00:05:43.903 10:06:03 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:43.903 10:06:03 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:43.903 10:06:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:43.903 10:06:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:43.903 10:06:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:43.903 10:06:03 -- setup/hugepages.sh@51 -- # shift 00:05:43.903 10:06:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:43.903 10:06:03 -- setup/hugepages.sh@52 -- # local node_ids 00:05:43.903 10:06:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.903 10:06:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:43.903 10:06:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:43.903 10:06:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:43.903 10:06:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.903 10:06:03 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=512 00:05:43.903 10:06:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:43.903 10:06:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.903 10:06:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.903 10:06:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:43.903 10:06:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:43.903 10:06:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:43.903 10:06:03 -- setup/hugepages.sh@73 -- # return 0 00:05:43.903 10:06:03 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:43.903 10:06:03 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:43.903 10:06:03 -- setup/hugepages.sh@146 -- # setup output 00:05:43.903 10:06:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.903 10:06:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.162 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.162 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.162 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.162 10:06:03 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:44.162 10:06:03 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:44.162 10:06:03 -- setup/hugepages.sh@89 -- # local node 00:05:44.162 10:06:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.162 10:06:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.162 10:06:03 -- setup/hugepages.sh@92 -- # local surp 00:05:44.162 10:06:03 -- setup/hugepages.sh@93 -- # local resv 00:05:44.162 10:06:03 -- setup/hugepages.sh@94 -- # local anon 00:05:44.162 10:06:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.162 10:06:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.162 10:06:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.162 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:44.162 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:44.162 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.162 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.162 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.162 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.162 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.162 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7872372 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 498532 kB' 'Inactive: 2459132 kB' 'Active(anon): 129380 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120240 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189580 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103752 kB' 'KernelStack: 6688 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.162 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.162 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.163 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.163 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.424 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.424 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 
10:06:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 
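[Editor's sketch] The field-by-field [[ ... ]] lines above and below are the xtrace of setup/common.sh's get_meminfo helper scanning a meminfo file for one key. Condensed, and with node handling and field cleanup simplified (this is not the verbatim SPDK helper), the behaviour traced here is roughly:

    # Condensed sketch of the get_meminfo behaviour seen in this trace.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node stats when a node is given
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#Node * }                    # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key and value
            if [[ $var == "$get" ]]; then
                echo "$val"                         # e.g. get_meminfo HugePages_Surp 0  -> 0 in this run
                return 0
            fi
        done < "$mem_f"
        return 1
    }

The backslash-escaped keys in the surrounding lines (e.g. \H\u\g\e\P\a\g\e\s\_\S\u\r\p) are simply how bash xtrace prints the quoted "$get" on the right-hand side of == inside [[ ]]; the script compares against a plain string, not a hand-escaped pattern.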
00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.425 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:44.425 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:44.425 10:06:03 -- setup/hugepages.sh@97 -- # anon=0 00:05:44.425 10:06:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.425 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.425 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:44.425 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:44.425 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.425 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.425 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.425 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.425 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.425 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7872372 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 498168 kB' 'Inactive: 2459132 kB' 'Active(anon): 129016 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120108 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189584 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103756 kB' 'KernelStack: 6704 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.425 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 
00:05:44.425 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.425 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 
-- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.426 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.426 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 
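[Editor's sketch] The 512 pages being verified throughout this stretch come from the get_test_nr_hugepages 1048576 0 call earlier in the trace: the per_node_1G_alloc test asks for 1048576 kB (1 GiB) of hugepages on node 0, and with the 2048 kB Hugepagesize reported in the meminfo dumps that is 512 pages, hence NRHUGE=512 and HUGENODE=0. A minimal illustration of that arithmetic (helper name and structure are illustrative, not the exact SPDK code):

    # Illustrative only: reproduces the 1048576 kB -> 512 pages arithmetic from the trace.
    kb_to_hugepages() {
        local size_kb=$1                 # requested allocation in kB (1048576 kB = 1 GiB)
        local hugepagesize_kb=2048       # Hugepagesize reported in the meminfo dumps above
        echo $(( size_kb / hugepagesize_kb ))
    }
    kb_to_hugepages 1048576              # prints 512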
00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.427 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:44.427 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:44.427 10:06:03 -- setup/hugepages.sh@99 -- # surp=0 00:05:44.427 10:06:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.427 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.427 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:44.427 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:44.427 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.427 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.427 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.427 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.427 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.427 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7872372 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 497840 kB' 'Inactive: 2459132 kB' 'Active(anon): 128688 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119820 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189584 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103756 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.427 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.427 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 
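[Editor's sketch] The checks that follow collect HugePages_Surp, HugePages_Rsvd and HugePages_Total via get_meminfo and assert that the configured page count matches what the kernel reports once surplus and reserved pages are accounted for. Reduced to its core (a simplified sketch, not the verbatim verify_nr_hugepages, which also reports AnonHugePages and walks per-node counts):

    # Rough shape of the consistency checks traced here; relies on a get_meminfo
    # helper such as the sketch shown earlier (or the real setup/common.sh one).
    check_hugepages() {
        local requested=$1                     # 512 for per_node_1G_alloc
        local surp resv total
        surp=$(get_meminfo HugePages_Surp)     # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
        total=$(get_meminfo HugePages_Total)   # 512 in this run
        # the kernel's total must equal the requested pages plus surplus/reserved
        (( total == requested + surp + resv )) || return 1
        (( total == requested )) || return 1
        echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    }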
00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.428 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.428 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.429 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:44.429 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:44.429 10:06:03 -- setup/hugepages.sh@100 -- # resv=0 00:05:44.429 nr_hugepages=512 00:05:44.429 10:06:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:44.429 resv_hugepages=0 00:05:44.429 10:06:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.429 surplus_hugepages=0 00:05:44.429 10:06:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.429 anon_hugepages=0 00:05:44.429 10:06:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.429 10:06:03 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:44.429 10:06:03 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:44.429 10:06:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.429 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.429 10:06:03 -- setup/common.sh@18 -- # local node= 00:05:44.429 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:44.429 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.429 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.429 10:06:03 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.429 10:06:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.429 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.429 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7872372 kB' 'MemAvailable: 10506420 kB' 'Buffers: 2684 kB' 'Cached: 2836088 kB' 'SwapCached: 0 kB' 'Active: 498040 kB' 'Inactive: 2459132 kB' 'Active(anon): 128888 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120020 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189584 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103756 kB' 'KernelStack: 6672 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # 
continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.429 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.429 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.430 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.430 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.431 10:06:03 -- setup/common.sh@33 -- # echo 512 00:05:44.431 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:44.431 10:06:03 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:44.431 10:06:03 -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.431 10:06:03 -- setup/hugepages.sh@27 -- # local node 00:05:44.431 10:06:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.431 10:06:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:44.431 10:06:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.431 10:06:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.431 10:06:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.431 10:06:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.431 10:06:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.431 10:06:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.431 10:06:03 -- setup/common.sh@18 -- # local node=0 00:05:44.431 10:06:03 -- setup/common.sh@19 -- # local var val 00:05:44.431 10:06:03 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.431 10:06:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.431 10:06:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.431 10:06:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.431 10:06:03 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.431 10:06:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7872372 kB' 'MemUsed: 4366732 kB' 'SwapCached: 0 kB' 'Active: 498068 kB' 'Inactive: 2459132 kB' 'Active(anon): 128916 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459132 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2838772 kB' 'Mapped: 50792 kB' 'AnonPages: 120004 kB' 'Shmem: 10488 kB' 'KernelStack: 6656 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85828 kB' 'Slab: 189568 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.431 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.431 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # 
continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # continue 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.432 10:06:03 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.432 10:06:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.432 10:06:03 -- setup/common.sh@33 -- # echo 0 00:05:44.432 10:06:03 -- setup/common.sh@33 -- # return 0 00:05:44.432 10:06:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 
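The block above is the per-node bookkeeping step: setup/hugepages.sh enumerates /sys/devices/system/node/node*, records the 512 pages expected on each node, then folds each node's reserved and surplus counts (both 0 in this run) back into the expected total before printing "node0=512 expecting 512". A hedged sketch of that accounting with illustrative variable and helper names; only the sysfs paths, the 512-page figure, and the "Node N " prefix handling are taken from the trace:

#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting exercised above.
declare -a nodes_test

# Read one field (e.g. HugePages_Surp) from a node-local meminfo file.
# Node files prefix every line with "Node N ", so strip that first.
node_meminfo() {
    local node=$1 key=$2 line var val _
    while read -r line; do
        line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

for path in /sys/devices/system/node/node[0-9]*; do
    nodes_test[${path##*node}]=512                    # pages expected per node
done

resv=0                                                # HugePages_Rsvd read earlier in the trace
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(node_meminfo "$node" HugePages_Surp)       # 0 in the run above
    (( nodes_test[node] += surp ))
    echo "node${node}=${nodes_test[node]} expecting 512"
done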
00:05:44.432 10:06:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.432 10:06:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.432 10:06:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.432 node0=512 expecting 512 00:05:44.432 10:06:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:44.432 10:06:03 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:44.432 00:05:44.432 real 0m0.532s 00:05:44.432 user 0m0.236s 00:05:44.432 sys 0m0.304s 00:05:44.432 10:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.432 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.432 ************************************ 00:05:44.432 END TEST per_node_1G_alloc 00:05:44.432 ************************************ 00:05:44.432 10:06:03 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:44.432 10:06:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.432 10:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.432 10:06:03 -- common/autotest_common.sh@10 -- # set +x 00:05:44.432 ************************************ 00:05:44.432 START TEST even_2G_alloc 00:05:44.432 ************************************ 00:05:44.432 10:06:03 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:44.432 10:06:03 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:44.432 10:06:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:44.432 10:06:03 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:44.432 10:06:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:44.432 10:06:03 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:44.432 10:06:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.432 10:06:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:44.432 10:06:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.432 10:06:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.432 10:06:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.432 10:06:03 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:44.432 10:06:03 -- setup/hugepages.sh@83 -- # : 0 00:05:44.432 10:06:03 -- setup/hugepages.sh@84 -- # : 0 00:05:44.432 10:06:03 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.432 10:06:03 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:44.432 10:06:03 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:44.432 10:06:03 -- setup/hugepages.sh@153 -- # setup output 00:05:44.432 10:06:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.432 10:06:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.691 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.691 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.691 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.952 10:06:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:44.952 10:06:04 -- setup/hugepages.sh@89 -- # local node 00:05:44.952 10:06:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.953 
10:06:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.953 10:06:04 -- setup/hugepages.sh@92 -- # local surp 00:05:44.953 10:06:04 -- setup/hugepages.sh@93 -- # local resv 00:05:44.953 10:06:04 -- setup/hugepages.sh@94 -- # local anon 00:05:44.953 10:06:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.953 10:06:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.953 10:06:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.953 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:44.953 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:44.953 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.953 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.953 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.953 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.953 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.953 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6821500 kB' 'MemAvailable: 9455552 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498488 kB' 'Inactive: 2459136 kB' 'Active(anon): 129336 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120692 kB' 'Mapped: 50904 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189580 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103752 kB' 'KernelStack: 6708 kB' 'PageTables: 4576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 
10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 
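Stepping back from the field scan for a moment: the even_2G_alloc setup earlier in the trace (get_test_nr_hugepages 2097152 ... nr_hugepages=1024) is just the requested allocation divided by the default hugepage size. A worked sketch of that arithmetic, treating both quantities in kB; the units are an assumption here, but the 2048 kB Hugepagesize and the resulting 1024 pages are exactly what the meminfo dumps above report:

#!/usr/bin/env bash
# Sketch of the page-count arithmetic behind "nr_hugepages=1024" above.
size_kb=2097152                                                      # 2 GiB request, in kB (assumed units)
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
if (( size_kb >= hugepagesize_kb )); then
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
fi
echo "nr_hugepages=${nr_hugepages}"                                  # -> nr_hugepages=1024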
00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.953 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.953 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.954 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:44.954 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:44.954 10:06:04 -- setup/hugepages.sh@97 -- # anon=0 00:05:44.954 10:06:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.954 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.954 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:44.954 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:44.954 10:06:04 
-- setup/common.sh@20 -- # local mem_f mem 00:05:44.954 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.954 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.954 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.954 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.954 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6821760 kB' 'MemAvailable: 9455812 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 497940 kB' 'Inactive: 2459136 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 119916 kB' 'Mapped: 50776 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189612 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103784 kB' 'KernelStack: 6720 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.954 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.954 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 
00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- 
setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.955 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.955 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:44.955 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:44.955 10:06:04 -- setup/hugepages.sh@99 -- # surp=0 00:05:44.955 10:06:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.955 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.955 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:44.955 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:44.955 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.955 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.955 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.955 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.955 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.955 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.955 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6821832 kB' 'MemAvailable: 9455884 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498156 kB' 'Inactive: 2459136 kB' 'Active(anon): 129004 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120140 kB' 'Mapped: 
50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189612 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103784 kB' 'KernelStack: 6704 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 
-- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
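
A side note on how these comparisons are rendered: the right-hand side shows up as "\H\u\g\e\P\a\g\e\s\_\R\s\v\d" because bash's xtrace escapes a quoted pattern on the right of == to make clear it is matched literally rather than as a glob. A tiny standalone reproduction (the output prefix will differ from the PS4 used by this job):

  #!/usr/bin/env bash
  set -x
  get=HugePages_Rsvd
  var=Shmem
  # Traces roughly as: [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  [[ $var == "$get" ]]
  set +x
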
00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.956 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.956 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 
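
Just below, the reserved-page lookup also comes back as 0 and verify_nr_hugepages checks that the kernel's pool adds up: HugePages_Total read back from meminfo must equal the requested nr_hugepages plus any surplus and reserved pages. A hedged sketch of that bookkeeping, with the values taken from this trace and the surrounding harness omitted:

  #!/usr/bin/env bash
  nr_hugepages=1024   # what the test configured
  surp=0              # HugePages_Surp via get_meminfo
  resv=0              # HugePages_Rsvd via get_meminfo
  total=1024          # HugePages_Total via get_meminfo
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool accounts for all $total pages"
  else
      echo "mismatch: total=$total vs expected=$((nr_hugepages + surp + resv))" >&2
      exit 1
  fi
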
00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.957 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:44.957 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:44.957 10:06:04 -- setup/hugepages.sh@100 -- # resv=0 00:05:44.957 nr_hugepages=1024 00:05:44.957 10:06:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.957 resv_hugepages=0 00:05:44.957 10:06:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.957 10:06:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.957 surplus_hugepages=0 00:05:44.957 anon_hugepages=0 00:05:44.957 10:06:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.957 10:06:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.957 10:06:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:44.957 10:06:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.957 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:44.957 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:44.957 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:44.957 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.957 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.957 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.957 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.957 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.957 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6821832 kB' 'MemAvailable: 9455884 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498088 kB' 'Inactive: 2459136 kB' 'Active(anon): 128936 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120016 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189596 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103768 kB' 'KernelStack: 6688 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.957 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.957 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # 
continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 
10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.958 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.958 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.958 10:06:04 -- setup/common.sh@33 -- # echo 1024 00:05:44.959 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:44.959 10:06:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.959 10:06:04 -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.959 10:06:04 -- setup/hugepages.sh@27 -- # local 
node 00:05:44.959 10:06:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.959 10:06:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.959 10:06:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.959 10:06:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.959 10:06:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.959 10:06:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.959 10:06:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.959 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.959 10:06:04 -- setup/common.sh@18 -- # local node=0 00:05:44.959 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:44.959 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.959 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.959 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.959 10:06:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.959 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.959 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6821832 kB' 'MemUsed: 5417272 kB' 'SwapCached: 0 kB' 'Active: 498112 kB' 'Inactive: 2459136 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'FilePages: 2838776 kB' 'Mapped: 50792 kB' 'AnonPages: 120044 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85828 kB' 'Slab: 189592 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 
10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 
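
This scan runs against /sys/devices/system/node/node0/meminfo rather than /proc/meminfo: on this single-node VM the test walks each NUMA node, folds the node's reserved and surplus pages into the expected count, and compares against what the node actually reports (the "node0=1024 expecting 1024" line just below). A simplified per-node loop under those assumptions (array semantics are inferred from the trace, not copied from setup/hugepages.sh):

  #!/usr/bin/env bash
  # Values mirror the trace: one NUMA node, 1024 pages, no surplus or reserved.
  declare -a nodes_test=( [0]=1024 )   # pages the test expects per node
  declare -a nodes_sys=(  [0]=1024 )   # pages each node actually reports
  resv=0
  for node in "${!nodes_test[@]}"; do
      surp=0   # per-node HugePages_Surp, as looked up just above
      (( nodes_test[node] += resv + surp ))
      echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
  done
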
00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.959 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.959 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # continue 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.960 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.960 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.960 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:44.960 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:44.960 10:06:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.960 10:06:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.960 10:06:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.960 node0=1024 expecting 1024 00:05:44.960 10:06:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.960 10:06:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.960 00:05:44.960 real 0m0.545s 00:05:44.960 user 0m0.286s 00:05:44.960 sys 0m0.290s 00:05:44.960 10:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.960 ************************************ 00:05:44.960 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 END TEST even_2G_alloc 00:05:44.960 ************************************ 00:05:44.960 10:06:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:44.960 10:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.960 10:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.960 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 ************************************ 00:05:44.960 START TEST odd_alloc 00:05:44.960 ************************************ 00:05:44.960 10:06:04 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:44.960 10:06:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:44.960 10:06:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:44.960 10:06:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 
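
The even_2G_alloc test finishes here and odd_alloc begins by requesting 2098176 kB of hugepage memory, i.e. HUGEMEM=2049 MB. With a 2048 kB hugepage size that is not a whole number of pages (2098176 / 2048 = 1024.5), and the trace shows the request becoming nr_hugepages=1025, so the harness evidently rounds up. A worked version of that arithmetic (ceiling division is an assumption that matches the observed result; the exact expression in setup/hugepages.sh may differ):

  #!/usr/bin/env bash
  hugemem_mb=2049                     # HUGEMEM=2049, seen further down in the trace
  size_kb=$(( hugemem_mb * 1024 ))    # 2098176 kB, the argument to get_test_nr_hugepages
  hugepagesize_kb=2048                # "Hugepagesize: 2048 kB" from meminfo
  # Assumed ceiling division: 2098176 / 2048 = 1024.5 -> 1025 pages
  nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
  echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1025
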
00:05:44.960 10:06:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:44.960 10:06:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:44.960 10:06:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:44.960 10:06:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:44.960 10:06:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:44.960 10:06:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:44.960 10:06:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:44.960 10:06:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:44.960 10:06:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:44.960 10:06:04 -- setup/hugepages.sh@83 -- # : 0 00:05:44.960 10:06:04 -- setup/hugepages.sh@84 -- # : 0 00:05:44.960 10:06:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:44.960 10:06:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:44.960 10:06:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:44.960 10:06:04 -- setup/hugepages.sh@160 -- # setup output 00:05:44.960 10:06:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.960 10:06:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:45.479 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.479 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:45.479 10:06:04 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:45.479 10:06:04 -- setup/hugepages.sh@89 -- # local node 00:05:45.479 10:06:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:45.479 10:06:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:45.479 10:06:04 -- setup/hugepages.sh@92 -- # local surp 00:05:45.479 10:06:04 -- setup/hugepages.sh@93 -- # local resv 00:05:45.479 10:06:04 -- setup/hugepages.sh@94 -- # local anon 00:05:45.479 10:06:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:45.479 10:06:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:45.479 10:06:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:45.479 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:45.479 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:45.479 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.479 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.479 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.479 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.479 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.479 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.479 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.479 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6818428 kB' 'MemAvailable: 9452480 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498532 kB' 'Inactive: 2459136 kB' 'Active(anon): 129380 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120512 kB' 'Mapped: 50912 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189568 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103740 kB' 'KernelStack: 6712 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 
10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.480 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.480 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:45.480 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:45.480 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:45.480 10:06:04 -- setup/hugepages.sh@97 -- # anon=0 00:05:45.480 10:06:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:45.480 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.480 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:45.480 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:45.480 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.480 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.480 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.480 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.480 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.481 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6818428 kB' 'MemAvailable: 9452480 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498156 kB' 'Inactive: 2459136 kB' 'Active(anon): 129004 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 120144 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189600 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103772 kB' 'KernelStack: 6704 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 
10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
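The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above and below are get_meminfo from setup/common.sh walking /proc/meminfo one field at a time until it reaches the requested key. A condensed stand-in is sketched here; the name get_meminfo_sketch and the exact option handling are illustrative, while the real helper mapfiles the whole file and strips "Node N " prefixes (the "${mem[@]#Node +([0-9]) }" expansion visible in the trace) when it reads a per-node meminfo:

# Sketch only: approximates the loop traced in this log.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo instead, as the traced helper does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

On this machine, get_meminfo_sketch HugePages_Surp would print 0, which matches the value the trace returns a few lines below (echo 0 / return 0).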
00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.481 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.481 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 
-- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.482 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:45.482 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:45.482 10:06:04 -- setup/hugepages.sh@99 -- # surp=0 00:05:45.482 10:06:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:45.482 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:45.482 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:45.482 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:45.482 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.482 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.482 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.482 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.482 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.482 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819108 kB' 'MemAvailable: 9453160 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498340 kB' 'Inactive: 2459136 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120268 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189600 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103772 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- 
setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.482 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.482 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 
10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 
10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:45.483 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:45.483 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:45.483 10:06:04 -- setup/hugepages.sh@100 -- # resv=0 00:05:45.483 10:06:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:45.483 nr_hugepages=1025 00:05:45.483 resv_hugepages=0 00:05:45.483 10:06:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:45.483 surplus_hugepages=0 00:05:45.483 10:06:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:45.483 anon_hugepages=0 00:05:45.483 10:06:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:45.483 10:06:04 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:45.483 10:06:04 -- 
setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:45.483 10:06:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:45.483 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:45.483 10:06:04 -- setup/common.sh@18 -- # local node= 00:05:45.483 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:45.483 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.483 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.483 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:45.483 10:06:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:45.483 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.483 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819108 kB' 'MemAvailable: 9453160 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498112 kB' 'Inactive: 2459136 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120044 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85828 kB' 'Slab: 189588 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103760 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458556 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 
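Taken together, the get_meminfo passes in this block (AnonHugePages, HugePages_Surp, HugePages_Rsvd, and the HugePages_Total read that follows) feed a simple consistency check. A sketch of that check, using the helper sketched earlier and the values visible in this trace (variable names mirror the log; this is illustrative, not the verbatim verify_nr_hugepages):

# anon is only collected because transparent_hugepage is not "[never]" on this box.
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in this trace
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1025
nr_hugepages=1025
(( total == nr_hugepages + surp + resv ))    # 1025 == 1025 + 0 + 0 -> passes
(( total == nr_hugepages ))                  # also passes, so the odd allocation took effect

After this, get_nodes repeats the HugePages_Surp lookup per NUMA node (only node0 exists on this VM, so no_nodes=1) to confirm where the pages landed.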
00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.483 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.483 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 
-- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.484 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.484 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:45.484 10:06:04 -- setup/common.sh@33 -- # echo 1025 00:05:45.484 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:45.484 10:06:04 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:45.484 10:06:04 -- setup/hugepages.sh@112 -- # get_nodes 00:05:45.484 10:06:04 -- setup/hugepages.sh@27 -- # local node 00:05:45.484 10:06:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:45.484 10:06:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:45.484 10:06:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:45.484 10:06:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:45.485 10:06:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:45.485 10:06:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:45.485 10:06:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:45.485 10:06:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:45.485 10:06:04 -- setup/common.sh@18 -- # local node=0 00:05:45.485 10:06:04 -- setup/common.sh@19 -- # local var val 00:05:45.485 10:06:04 -- setup/common.sh@20 -- # local mem_f mem 00:05:45.485 10:06:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:45.485 10:06:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:45.485 10:06:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:45.485 10:06:04 -- setup/common.sh@28 -- # mapfile -t mem 00:05:45.485 10:06:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6819108 kB' 'MemUsed: 5419996 kB' 'SwapCached: 0 kB' 'Active: 498112 kB' 'Inactive: 2459136 kB' 'Active(anon): 128960 kB' 'Inactive(anon): 0 
kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2838776 kB' 'Mapped: 50792 kB' 'AnonPages: 120044 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85828 kB' 'Slab: 189588 kB' 'SReclaimable: 85828 kB' 'SUnreclaim: 103760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # 
[[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 
10:06:04 -- setup/common.sh@32 -- # continue 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # IFS=': ' 00:05:45.485 10:06:04 -- setup/common.sh@31 -- # read -r var val _ 00:05:45.485 10:06:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:45.485 10:06:04 -- setup/common.sh@33 -- # echo 0 00:05:45.485 10:06:04 -- setup/common.sh@33 -- # return 0 00:05:45.485 10:06:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:45.485 10:06:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:45.485 10:06:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:45.485 10:06:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:45.485 node0=1025 expecting 1025 00:05:45.485 10:06:04 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:45.485 10:06:04 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:45.485 00:05:45.485 real 0m0.487s 00:05:45.485 user 0m0.245s 00:05:45.485 sys 0m0.262s 00:05:45.485 10:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.485 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:45.485 ************************************ 00:05:45.485 END TEST odd_alloc 00:05:45.485 ************************************ 00:05:45.486 10:06:04 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:45.486 10:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.486 10:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.486 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:05:45.486 ************************************ 00:05:45.486 START TEST custom_alloc 00:05:45.486 ************************************ 00:05:45.486 10:06:05 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:45.486 10:06:05 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:45.486 10:06:05 -- setup/hugepages.sh@169 -- # local node 00:05:45.486 10:06:05 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:45.486 10:06:05 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:45.486 10:06:05 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:45.486 10:06:05 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:45.486 10:06:05 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:45.486 10:06:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:45.486 10:06:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:45.486 10:06:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.486 10:06:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.486 10:06:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.486 10:06:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.486 10:06:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@83 -- # : 0 00:05:45.486 10:06:05 -- setup/hugepages.sh@84 -- # : 0 00:05:45.486 10:06:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@175 -- # 
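The xtrace above closes out the odd_alloc case: HugePages_Total is read back as 1025 (an intentionally odd page count), compared against the expected nr_hugepages + surp + resv at setup/hugepages.sh@110, and the per-node tally is echoed and string-matched. A minimal sketch of that final check, using the values from this run (the real script iterates nodes_test[] rather than hard-coding node 0):

    # sketch of the odd_alloc verification traced above (hugepages.sh@110, @128-@130)
    nr_hugepages=1025 surp=0 resv=0
    (( 1025 == nr_hugepages + surp + resv )) || exit 1   # 1025 == HugePages_Total from node0 meminfo
    echo "node0=1025 expecting 1025"
    [[ 1025 == 1025 ]]                                   # pass/fail for the test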
nodes_hp[0]=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:45.486 10:06:05 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:45.486 10:06:05 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:45.486 10:06:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:45.486 10:06:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:45.486 10:06:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:45.486 10:06:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:45.486 10:06:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:45.486 10:06:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:45.486 10:06:05 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:45.486 10:06:05 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:45.486 10:06:05 -- setup/hugepages.sh@78 -- # return 0 00:05:45.486 10:06:05 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:45.486 10:06:05 -- setup/hugepages.sh@187 -- # setup output 00:05:45.486 10:06:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.486 10:06:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.005 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.005 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.005 10:06:05 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:46.005 10:06:05 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:46.005 10:06:05 -- setup/hugepages.sh@89 -- # local node 00:05:46.005 10:06:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.005 10:06:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.005 10:06:05 -- setup/hugepages.sh@92 -- # local surp 00:05:46.005 10:06:05 -- setup/hugepages.sh@93 -- # local resv 00:05:46.005 10:06:05 -- setup/hugepages.sh@94 -- # local anon 00:05:46.005 10:06:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.005 10:06:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.005 10:06:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.005 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.005 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.005 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.005 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.005 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.005 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.005 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.005 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7866724 kB' 'MemAvailable: 10500784 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498228 kB' 'Inactive: 2459136 kB' 'Active(anon): 129076 kB' 
'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120144 kB' 'Mapped: 50808 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189596 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103752 kB' 'KernelStack: 6680 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 
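custom_alloc then asks get_test_nr_hugepages for 1048576 kB of hugepage memory; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to 512 pages, which are pinned to node 0 as nodes_hp[0]=512 and handed to scripts/setup.sh via HUGENODE before the "setup output" step. A rough sketch of that sizing (the helper's real argument and multi-node handling is more involved):

    # rough sketch of hugepages.sh@174 "get_test_nr_hugepages 1048576", as traced above
    size_kb=1048576
    hugepagesize_kb=2048                            # "Hugepagesize: 2048 kB" in the meminfo dumps
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 512
    HUGENODE="nodes_hp[0]=$nr_hugepages"            # consumed by scripts/setup.sh
    echo "$HUGENODE"                                # nodes_hp[0]=512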
00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.005 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.005 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 
00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 
-- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.006 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.006 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.006 10:06:05 -- setup/hugepages.sh@97 -- # anon=0 00:05:46.006 10:06:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.006 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.006 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.006 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.006 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.006 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.006 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.006 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.006 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.006 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7866724 kB' 'MemAvailable: 10500784 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498164 kB' 'Inactive: 2459136 kB' 'Active(anon): 129012 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120100 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189624 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103780 kB' 'KernelStack: 6688 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 
'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.006 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.006 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- 
setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.007 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.007 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 
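The long [[ field == \H\u\g\e\P\a\g\e\s\_... ]] / continue runs are bash xtrace of the get_meminfo helper in setup/common.sh walking every meminfo line until the requested key matches. A condensed paraphrase of that helper as it appears in the trace (not the script verbatim; extglob is assumed to be enabled for the "Node N" prefix strip):

    # condensed paraphrase of setup/common.sh get_meminfo, per the trace (@17-@33)
    get_meminfo() {
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. 1025, 512, 0
        done < <(printf '%s\n' "${mem[@]}")
    }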
00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.008 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.008 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.008 10:06:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:46.008 10:06:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.008 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.008 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.008 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.008 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.008 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.008 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.008 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.008 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.008 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7866724 kB' 'MemAvailable: 10500784 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498116 kB' 'Inactive: 2459136 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189616 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103772 kB' 'KernelStack: 6672 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.008 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.008 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.009 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.009 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.009 10:06:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:46.009 nr_hugepages=512 00:05:46.009 10:06:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:46.009 resv_hugepages=0 00:05:46.009 10:06:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.009 surplus_hugepages=0 00:05:46.009 10:06:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.009 anon_hugepages=0 00:05:46.009 10:06:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 
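What the xtrace above records is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: with IFS set to ': ' it reads each line into var and val, hits continue for every key that is not the requested HugePages_Rsvd, and finally echoes the matching value (0 here), which setup/hugepages.sh then carries as resv alongside nr_hugepages=512, surplus_hugepages=0 and anon_hugepages=0. A minimal sketch of that lookup plus the accounting check follows; the field names, the 512, and the "nr_hugepages + surp + resv" expression come from this log, while the function name get_meminfo_field and the rest of the wiring are illustrative assumptions rather than the SPDK script itself.

#!/usr/bin/env bash
# Illustrative sketch only -- mirrors the field-by-field scan seen in the trace above.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the trace shows one "continue" per non-matching key
        echo "$val"
        return 0
    done < /proc/meminfo
}

nr_hugepages=512                              # requested pool size echoed in the log
resv=$(get_meminfo_field HugePages_Rsvd)      # 0 in this run
surp=$(get_meminfo_field HugePages_Surp)      # 0 in this run
total=$(get_meminfo_field HugePages_Total)    # 512 in this run
# custom_alloc passes when the kernel pool equals requested + surplus + reserved pages.
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"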
00:05:46.009 10:06:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.009 10:06:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:46.009 10:06:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.009 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.009 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.009 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.009 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.009 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.009 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.009 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.009 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.009 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7866724 kB' 'MemAvailable: 10500784 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 497876 kB' 'Inactive: 2459136 kB' 'Active(anon): 128724 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120064 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189616 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103772 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983868 kB' 'Committed_AS: 322344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.009 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.009 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # 
continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.010 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.010 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.011 10:06:05 -- setup/common.sh@33 -- # echo 512 00:05:46.011 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.011 10:06:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:46.011 10:06:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.011 10:06:05 -- setup/hugepages.sh@27 -- # local node 00:05:46.011 10:06:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.011 10:06:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:46.011 10:06:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.011 10:06:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.011 10:06:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.011 10:06:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.011 10:06:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.011 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.011 10:06:05 -- setup/common.sh@18 -- # local node=0 00:05:46.011 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.011 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.011 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.011 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.011 10:06:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.011 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.011 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@16 -- 
# printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 7866724 kB' 'MemUsed: 4372380 kB' 'SwapCached: 0 kB' 'Active: 498116 kB' 'Inactive: 2459136 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2838776 kB' 'Mapped: 50792 kB' 'AnonPages: 120048 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85844 kB' 'Slab: 189608 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.011 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.011 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- 
setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 
00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.012 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.012 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.012 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.012 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.012 10:06:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.012 10:06:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.012 10:06:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.012 10:06:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.012 node0=512 expecting 512 00:05:46.012 10:06:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:46.012 10:06:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:46.012 00:05:46.012 real 0m0.496s 00:05:46.012 user 0m0.253s 00:05:46.012 sys 0m0.270s 00:05:46.012 10:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.012 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.012 ************************************ 00:05:46.012 END TEST custom_alloc 00:05:46.012 ************************************ 00:05:46.012 10:06:05 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:46.012 10:06:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.012 10:06:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.012 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.012 ************************************ 00:05:46.012 START TEST no_shrink_alloc 00:05:46.012 ************************************ 00:05:46.012 10:06:05 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:46.012 10:06:05 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:46.012 10:06:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:46.012 10:06:05 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:46.012 10:06:05 -- setup/hugepages.sh@51 -- # shift 00:05:46.012 10:06:05 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:46.012 10:06:05 -- setup/hugepages.sh@52 -- # local node_ids 00:05:46.012 10:06:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:46.012 10:06:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:46.012 10:06:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:46.012 10:06:05 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:46.012 10:06:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:46.012 10:06:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:46.012 10:06:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:46.012 10:06:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:46.012 10:06:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:46.012 10:06:05 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:46.012 10:06:05 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:46.012 10:06:05 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:46.012 10:06:05 -- setup/hugepages.sh@73 -- # return 0 00:05:46.012 10:06:05 -- setup/hugepages.sh@198 -- # setup output 00:05:46.012 10:06:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.012 10:06:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.271 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.533 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.533 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.533 10:06:05 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:46.533 10:06:05 -- setup/hugepages.sh@89 -- # local node 00:05:46.533 10:06:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.533 10:06:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.533 10:06:05 -- setup/hugepages.sh@92 -- # local surp 00:05:46.533 10:06:05 -- setup/hugepages.sh@93 -- # local resv 00:05:46.533 10:06:05 -- setup/hugepages.sh@94 -- # local anon 00:05:46.533 10:06:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.533 10:06:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.533 10:06:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.533 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.533 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.533 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.533 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.533 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.533 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.533 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.533 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6825336 kB' 'MemAvailable: 9459396 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498692 kB' 'Inactive: 2459136 kB' 'Active(anon): 129540 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120680 kB' 'Mapped: 50980 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189500 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103656 kB' 'KernelStack: 6712 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- 
setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.533 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.533 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 
-- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.534 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.534 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.534 10:06:05 -- setup/hugepages.sh@97 -- # anon=0 00:05:46.534 10:06:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:46.534 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.534 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.534 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.534 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.534 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.534 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.534 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.534 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.534 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6825336 kB' 'MemAvailable: 9459396 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498444 kB' 'Inactive: 2459136 kB' 'Active(anon): 129292 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 50980 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189512 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103668 kB' 'KernelStack: 6728 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.534 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.534 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 
00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.535 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.535 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.536 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.536 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.536 10:06:05 -- setup/hugepages.sh@99 -- # surp=0 00:05:46.536 10:06:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:46.536 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:46.536 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.536 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.536 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.536 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.536 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.536 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.536 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.536 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.536 10:06:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6825084 kB' 'MemAvailable: 9459144 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498152 kB' 'Inactive: 2459136 kB' 'Active(anon): 129000 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120124 kB' 'Mapped: 50844 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189560 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103716 kB' 'KernelStack: 6712 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.536 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.536 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 
10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:46.537 10:06:05 -- setup/common.sh@33 -- # echo 0 00:05:46.537 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.537 10:06:05 -- setup/hugepages.sh@100 -- # resv=0 00:05:46.537 nr_hugepages=1024 00:05:46.537 10:06:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:46.537 resv_hugepages=0 00:05:46.537 10:06:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:46.537 surplus_hugepages=0 00:05:46.537 10:06:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:46.537 anon_hugepages=0 00:05:46.537 10:06:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:46.537 10:06:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.537 10:06:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:46.537 10:06:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:46.537 10:06:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:46.537 10:06:05 -- setup/common.sh@18 -- # local node= 00:05:46.537 10:06:05 -- setup/common.sh@19 -- # local var val 00:05:46.537 10:06:05 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.537 10:06:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.537 10:06:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.537 10:06:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.537 10:06:05 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.537 10:06:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6825084 kB' 'MemAvailable: 9459144 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498032 kB' 'Inactive: 2459136 kB' 'Active(anon): 128880 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120232 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189600 kB' 
'SReclaimable: 85844 kB' 'SUnreclaim: 103756 kB' 'KernelStack: 6720 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.537 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.537 10:06:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 
10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- 
setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.538 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.538 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- 
setup/common.sh@32 -- # continue 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:05 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:46.539 10:06:05 -- setup/common.sh@33 -- # echo 1024 00:05:46.539 10:06:05 -- setup/common.sh@33 -- # return 0 00:05:46.539 10:06:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:46.539 10:06:05 -- setup/hugepages.sh@112 -- # get_nodes 00:05:46.539 10:06:05 -- setup/hugepages.sh@27 -- # local node 00:05:46.539 10:06:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:46.539 10:06:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:46.539 10:06:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:46.539 10:06:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:46.539 10:06:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:46.539 10:06:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:46.539 10:06:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:46.539 10:06:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:46.539 10:06:06 -- setup/common.sh@18 -- # local node=0 00:05:46.539 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:46.539 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.539 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.539 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:46.539 10:06:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:46.539 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.539 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6825084 kB' 'MemUsed: 5414020 kB' 'SwapCached: 0 kB' 'Active: 498216 kB' 'Inactive: 2459136 kB' 'Active(anon): 129064 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2838776 kB' 'Mapped: 50792 kB' 'AnonPages: 120148 kB' 'Shmem: 10488 kB' 'KernelStack: 6704 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85844 kB' 'Slab: 189596 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 
00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.539 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.539 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.540 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.540 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:46.540 10:06:06 -- setup/common.sh@33 -- # echo 0 00:05:46.540 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:46.540 10:06:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:46.540 10:06:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:46.540 10:06:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:46.540 10:06:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:46.540 node0=1024 expecting 1024 00:05:46.540 10:06:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:46.540 10:06:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:46.540 10:06:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:46.540 10:06:06 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:46.540 10:06:06 -- setup/hugepages.sh@202 -- # setup output 00:05:46.540 10:06:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.540 10:06:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
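The xtrace above shows setup/common.sh scanning a meminfo file key by key (IFS=': ' read -r var val _) until it reaches the requested HugePages_* counter, once against /proc/meminfo and once against the node0 file, and then checking the pool size arithmetically. A minimal sketch of that lookup pattern under the conditions of this run (1024 pages of 2048 kB); get_meminfo_value and requested are illustrative names, not the script's own helpers:

```bash
#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup pattern traced above; not the SPDK script itself.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs and prefix every line with "Node <n>".
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
    local var val _
    # Split each line on ':' and spaces, exactly like the traced read loop,
    # after stripping any "Node <n> " prefix.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* *//' "$mem_f")
    return 1
}

total=$(get_meminfo_value HugePages_Total)      # system-wide pool size
surp=$(get_meminfo_value HugePages_Surp 0)      # per-node variant, node 0
rsvd=$(get_meminfo_value HugePages_Rsvd)

# The verification in the trace reduces to this check: the kernel's pool equals
# the requested count once surplus and reserved pages are accounted for.
requested=1024   # value used in this run
(( total == requested + surp + rsvd )) && echo "hugepage pool matches the request"
```

The INFO line that follows ("Requested 512 hugepages but 1024 already allocated on node0") is consistent with this: with CLEAR_HUGE=no the existing 1024-page pool is left in place even though only 512 pages were asked for on the rerun.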
00:05:46.799 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.799 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.799 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:46.799 10:06:06 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:46.799 10:06:06 -- setup/hugepages.sh@89 -- # local node 00:05:46.799 10:06:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:46.799 10:06:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:46.799 10:06:06 -- setup/hugepages.sh@92 -- # local surp 00:05:46.799 10:06:06 -- setup/hugepages.sh@93 -- # local resv 00:05:46.799 10:06:06 -- setup/hugepages.sh@94 -- # local anon 00:05:46.799 10:06:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:46.799 10:06:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:46.799 10:06:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:46.799 10:06:06 -- setup/common.sh@18 -- # local node= 00:05:46.799 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:46.799 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:46.799 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:46.799 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:46.799 10:06:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:46.799 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:46.799 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.799 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6820552 kB' 'MemAvailable: 9454612 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498620 kB' 'Inactive: 2459136 kB' 'Active(anon): 129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120612 kB' 'Mapped: 50916 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189652 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103808 kB' 'KernelStack: 6792 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # 
[[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:46.799 10:06:06 -- setup/common.sh@32 -- # continue 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:46.799 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 
-- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': 
' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.062 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.062 10:06:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:47.062 10:06:06 -- 
setup/common.sh@33 -- # echo 0 00:05:47.062 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:47.062 10:06:06 -- setup/hugepages.sh@97 -- # anon=0 00:05:47.062 10:06:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:47.062 10:06:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.062 10:06:06 -- setup/common.sh@18 -- # local node= 00:05:47.062 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:47.062 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.063 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.063 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.063 10:06:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.063 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.063 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6820552 kB' 'MemAvailable: 9454612 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498248 kB' 'Inactive: 2459136 kB' 'Active(anon): 129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120252 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189668 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103824 kB' 'KernelStack: 6736 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- 
setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 
00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.063 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.063 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.064 10:06:06 -- setup/common.sh@33 -- # echo 0 00:05:47.064 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:47.064 10:06:06 -- setup/hugepages.sh@99 -- # surp=0 00:05:47.064 10:06:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:47.064 10:06:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:47.064 10:06:06 -- setup/common.sh@18 -- # local node= 00:05:47.064 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:47.064 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.064 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.064 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.064 10:06:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.064 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.064 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6820804 kB' 'MemAvailable: 9454864 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498132 kB' 'Inactive: 2459136 kB' 'Active(anon): 128980 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120084 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189624 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103780 kB' 'KernelStack: 6688 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.064 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.064 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # 
continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.065 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.065 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:47.065 10:06:06 -- setup/common.sh@33 -- # echo 0 00:05:47.065 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:47.065 10:06:06 -- setup/hugepages.sh@100 -- # resv=0 00:05:47.065 10:06:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:47.065 nr_hugepages=1024 00:05:47.065 resv_hugepages=0 00:05:47.065 10:06:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:47.065 surplus_hugepages=0 00:05:47.065 10:06:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:47.065 anon_hugepages=0 00:05:47.065 10:06:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:47.065 10:06:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.065 10:06:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:47.065 10:06:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:47.065 10:06:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:47.065 10:06:06 -- setup/common.sh@18 -- # local node= 00:05:47.065 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:47.065 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.066 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.066 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:47.066 10:06:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:47.066 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.066 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6820804 kB' 'MemAvailable: 9454864 kB' 'Buffers: 2684 kB' 'Cached: 2836092 kB' 'SwapCached: 0 kB' 'Active: 498180 kB' 'Inactive: 2459136 kB' 'Active(anon): 129028 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120180 kB' 'Mapped: 50792 kB' 'Shmem: 10488 kB' 'KReclaimable: 85844 kB' 'Slab: 189624 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103780 kB' 'KernelStack: 6704 kB' 'PageTables: 4452 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459580 kB' 'Committed_AS: 322544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6432 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 192364 kB' 'DirectMap2M: 6098944 kB' 'DirectMap1G: 8388608 kB' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.066 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.066 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:47.067 10:06:06 -- setup/common.sh@33 -- # echo 1024 00:05:47.067 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:47.067 10:06:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:47.067 10:06:06 -- setup/hugepages.sh@112 -- # get_nodes 00:05:47.067 10:06:06 -- setup/hugepages.sh@27 -- # local node 00:05:47.067 10:06:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:47.067 10:06:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:47.067 10:06:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:47.067 10:06:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:47.067 10:06:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:47.067 10:06:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:47.067 10:06:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:47.067 10:06:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:47.067 10:06:06 -- setup/common.sh@18 -- # local node=0 00:05:47.067 10:06:06 -- setup/common.sh@19 -- # local var val 00:05:47.067 10:06:06 -- setup/common.sh@20 -- # local mem_f mem 00:05:47.067 10:06:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:47.067 10:06:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:47.067 10:06:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:47.067 10:06:06 -- setup/common.sh@28 -- # mapfile -t mem 00:05:47.067 10:06:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239104 kB' 'MemFree: 6820804 kB' 'MemUsed: 5418300 kB' 'SwapCached: 0 kB' 'Active: 498152 kB' 'Inactive: 2459136 kB' 'Active(anon): 129000 kB' 'Inactive(anon): 0 kB' 'Active(file): 369152 kB' 'Inactive(file): 2459136 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2838776 kB' 'Mapped: 50792 kB' 'AnonPages: 120108 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85844 kB' 'Slab: 189628 kB' 'SReclaimable: 85844 kB' 'SUnreclaim: 103784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.067 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.067 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.067 10:06:06 -- 
setup/common.sh@31 -- # read -r var val _ (the node0 meminfo fields from SwapCached through SUnreclaim each fail the HugePages_Surp match and take the same continue path) 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # continue 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # IFS=': ' 00:05:47.068 10:06:06 -- setup/common.sh@31 -- # read -r var val _ 00:05:47.068 10:06:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:47.068 10:06:06 -- setup/common.sh@33 -- # echo 0 00:05:47.068 10:06:06 -- setup/common.sh@33 -- # return 0 00:05:47.068 10:06:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:47.068 10:06:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:47.068 10:06:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:47.068 10:06:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:47.068 node0=1024 expecting 1024 00:05:47.068 10:06:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:47.068 10:06:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:47.068 00:05:47.068 real 0m0.982s 00:05:47.068 user 0m0.466s 00:05:47.068 sys 0m0.541s 00:05:47.068 10:06:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.068 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.068 ************************************ 00:05:47.068 END TEST no_shrink_alloc 00:05:47.068 ************************************ 00:05:47.068 10:06:06 -- setup/hugepages.sh@217 -- # clear_hp 00:05:47.068 10:06:06 -- setup/hugepages.sh@37 -- # local node hp 00:05:47.068 10:06:06 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
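The get_meminfo calls traced above read HugePages_* counters either from /proc/meminfo or, when a node number is given, from /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix. A minimal standalone sketch of the same idea, with get_hp_field as an assumed helper name rather than anything in setup/common.sh:

get_hp_field() {
    local field=$1 node=$2
    local src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        src=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024";
    # strip the prefix so both sources can be matched the same way.
    sed 's/^Node [0-9]* //' "$src" | awk -v f="$field:" '$1 == f { print $2 }'
}

# e.g. the values the trace above reports for node 0:
get_hp_field HugePages_Total 0    # 1024
get_hp_field HugePages_Surp 0     # 0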
00:05:47.068 10:06:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.068 10:06:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:47.068 10:06:06 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:47.068 10:06:06 -- setup/hugepages.sh@41 -- # echo 0 00:05:47.068 10:06:06 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:47.068 10:06:06 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:47.068 00:05:47.068 real 0m4.484s 00:05:47.068 user 0m2.162s 00:05:47.068 sys 0m2.374s 00:05:47.068 10:06:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:47.068 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.068 ************************************ 00:05:47.068 END TEST hugepages 00:05:47.068 ************************************ 00:05:47.068 10:06:06 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.068 10:06:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.068 10:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.068 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.068 ************************************ 00:05:47.068 START TEST driver 00:05:47.068 ************************************ 00:05:47.068 10:06:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:47.327 * Looking for test storage... 00:05:47.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:47.327 10:06:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:47.327 10:06:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:47.327 10:06:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:47.327 10:06:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:47.327 10:06:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:47.327 10:06:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:47.327 10:06:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:47.327 10:06:06 -- scripts/common.sh@335 -- # IFS=.-: 00:05:47.327 10:06:06 -- scripts/common.sh@335 -- # read -ra ver1 00:05:47.327 10:06:06 -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.327 10:06:06 -- scripts/common.sh@336 -- # read -ra ver2 00:05:47.327 10:06:06 -- scripts/common.sh@337 -- # local 'op=<' 00:05:47.327 10:06:06 -- scripts/common.sh@339 -- # ver1_l=2 00:05:47.327 10:06:06 -- scripts/common.sh@340 -- # ver2_l=1 00:05:47.327 10:06:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:47.327 10:06:06 -- scripts/common.sh@343 -- # case "$op" in 00:05:47.327 10:06:06 -- scripts/common.sh@344 -- # : 1 00:05:47.327 10:06:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:47.327 10:06:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.327 10:06:06 -- scripts/common.sh@364 -- # decimal 1 00:05:47.327 10:06:06 -- scripts/common.sh@352 -- # local d=1 00:05:47.327 10:06:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.327 10:06:06 -- scripts/common.sh@354 -- # echo 1 00:05:47.327 10:06:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:47.327 10:06:06 -- scripts/common.sh@365 -- # decimal 2 00:05:47.327 10:06:06 -- scripts/common.sh@352 -- # local d=2 00:05:47.327 10:06:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.327 10:06:06 -- scripts/common.sh@354 -- # echo 2 00:05:47.327 10:06:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:47.327 10:06:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:47.327 10:06:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:47.327 10:06:06 -- scripts/common.sh@367 -- # return 0 00:05:47.327 10:06:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.327 10:06:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.327 --rc genhtml_branch_coverage=1 00:05:47.327 --rc genhtml_function_coverage=1 00:05:47.327 --rc genhtml_legend=1 00:05:47.327 --rc geninfo_all_blocks=1 00:05:47.327 --rc geninfo_unexecuted_blocks=1 00:05:47.327 00:05:47.327 ' 00:05:47.327 10:06:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.327 --rc genhtml_branch_coverage=1 00:05:47.327 --rc genhtml_function_coverage=1 00:05:47.327 --rc genhtml_legend=1 00:05:47.327 --rc geninfo_all_blocks=1 00:05:47.327 --rc geninfo_unexecuted_blocks=1 00:05:47.327 00:05:47.327 ' 00:05:47.327 10:06:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.327 --rc genhtml_branch_coverage=1 00:05:47.327 --rc genhtml_function_coverage=1 00:05:47.327 --rc genhtml_legend=1 00:05:47.327 --rc geninfo_all_blocks=1 00:05:47.327 --rc geninfo_unexecuted_blocks=1 00:05:47.327 00:05:47.327 ' 00:05:47.327 10:06:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:47.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.327 --rc genhtml_branch_coverage=1 00:05:47.327 --rc genhtml_function_coverage=1 00:05:47.327 --rc genhtml_legend=1 00:05:47.327 --rc geninfo_all_blocks=1 00:05:47.327 --rc geninfo_unexecuted_blocks=1 00:05:47.327 00:05:47.327 ' 00:05:47.327 10:06:06 -- setup/driver.sh@68 -- # setup reset 00:05:47.327 10:06:06 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:47.327 10:06:06 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.894 10:06:07 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:47.894 10:06:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.894 10:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.894 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:05:47.894 ************************************ 00:05:47.894 START TEST guess_driver 00:05:47.894 ************************************ 00:05:47.894 10:06:07 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:47.894 10:06:07 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:47.894 10:06:07 -- setup/driver.sh@47 -- # local fail=0 00:05:47.894 10:06:07 -- setup/driver.sh@49 -- # pick_driver 00:05:47.894 10:06:07 -- setup/driver.sh@36 -- # vfio 
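The lt/cmp_versions trace just above compares the installed lcov version against 2 component by component. A rough equivalent, with ver_lt as an assumed name (the real helper in scripts/common.sh also splits on '-' and ':'):

ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov is older than 2"   # matches the 1.15 < 2 check above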
00:05:47.894 10:06:07 -- setup/driver.sh@21 -- # local iommu_grups 00:05:47.894 10:06:07 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:47.894 10:06:07 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:47.894 10:06:07 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:47.894 10:06:07 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:47.894 10:06:07 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:47.894 10:06:07 -- setup/driver.sh@32 -- # return 1 00:05:47.894 10:06:07 -- setup/driver.sh@38 -- # uio 00:05:47.894 10:06:07 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:47.894 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:47.894 10:06:07 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:47.894 Looking for driver=uio_pci_generic 00:05:47.894 10:06:07 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:47.894 10:06:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:47.894 10:06:07 -- setup/driver.sh@45 -- # setup output config 00:05:47.894 10:06:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.894 10:06:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.461 10:06:07 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:48.461 10:06:07 -- setup/driver.sh@58 -- # continue 00:05:48.461 10:06:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.719 10:06:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.719 10:06:08 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:48.719 10:06:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.719 10:06:08 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:48.719 10:06:08 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:48.719 10:06:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:48.719 10:06:08 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:48.719 10:06:08 -- setup/driver.sh@65 -- # setup reset 00:05:48.719 10:06:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:48.719 10:06:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.286 00:05:49.286 real 0m1.355s 00:05:49.286 user 0m0.479s 00:05:49.286 sys 0m0.824s 00:05:49.286 10:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.286 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.286 ************************************ 00:05:49.286 END TEST guess_driver 00:05:49.286 ************************************ 00:05:49.286 00:05:49.286 real 0m2.100s 00:05:49.286 user 0m0.808s 00:05:49.286 sys 0m1.294s 00:05:49.286 10:06:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.286 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.286 ************************************ 00:05:49.286 END TEST driver 00:05:49.286 
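guess_driver above rejects vfio because /sys/kernel/iommu_groups is empty and the unsafe no-IOMMU knob is not set, then settles on uio_pci_generic once modprobe --show-depends confirms the module resolves to real .ko files. A condensed sketch of that decision, with pick_uio_or_vfio as an assumed name:

pick_uio_or_vfio() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if [[ -e ${groups[0]} || $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

driver=$(pick_uio_or_vfio)    # uio_pci_generic on this VM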
************************************ 00:05:49.286 10:06:08 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.286 10:06:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.286 10:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.286 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:05:49.286 ************************************ 00:05:49.286 START TEST devices 00:05:49.286 ************************************ 00:05:49.286 10:06:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:49.286 * Looking for test storage... 00:05:49.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:49.286 10:06:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:49.286 10:06:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:49.286 10:06:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:49.545 10:06:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:49.545 10:06:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:49.545 10:06:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:49.545 10:06:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:49.545 10:06:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:49.545 10:06:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:49.545 10:06:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.545 10:06:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:49.545 10:06:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:49.545 10:06:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:49.545 10:06:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:49.545 10:06:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:49.545 10:06:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:49.545 10:06:08 -- scripts/common.sh@344 -- # : 1 00:05:49.545 10:06:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:49.545 10:06:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.545 10:06:08 -- scripts/common.sh@364 -- # decimal 1 00:05:49.545 10:06:08 -- scripts/common.sh@352 -- # local d=1 00:05:49.545 10:06:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.545 10:06:08 -- scripts/common.sh@354 -- # echo 1 00:05:49.545 10:06:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:49.545 10:06:08 -- scripts/common.sh@365 -- # decimal 2 00:05:49.545 10:06:08 -- scripts/common.sh@352 -- # local d=2 00:05:49.545 10:06:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.545 10:06:08 -- scripts/common.sh@354 -- # echo 2 00:05:49.545 10:06:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:49.545 10:06:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:49.545 10:06:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:49.545 10:06:08 -- scripts/common.sh@367 -- # return 0 00:05:49.546 10:06:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.546 10:06:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.546 --rc genhtml_branch_coverage=1 00:05:49.546 --rc genhtml_function_coverage=1 00:05:49.546 --rc genhtml_legend=1 00:05:49.546 --rc geninfo_all_blocks=1 00:05:49.546 --rc geninfo_unexecuted_blocks=1 00:05:49.546 00:05:49.546 ' 00:05:49.546 10:06:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.546 --rc genhtml_branch_coverage=1 00:05:49.546 --rc genhtml_function_coverage=1 00:05:49.546 --rc genhtml_legend=1 00:05:49.546 --rc geninfo_all_blocks=1 00:05:49.546 --rc geninfo_unexecuted_blocks=1 00:05:49.546 00:05:49.546 ' 00:05:49.546 10:06:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.546 --rc genhtml_branch_coverage=1 00:05:49.546 --rc genhtml_function_coverage=1 00:05:49.546 --rc genhtml_legend=1 00:05:49.546 --rc geninfo_all_blocks=1 00:05:49.546 --rc geninfo_unexecuted_blocks=1 00:05:49.546 00:05:49.546 ' 00:05:49.546 10:06:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:49.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.546 --rc genhtml_branch_coverage=1 00:05:49.546 --rc genhtml_function_coverage=1 00:05:49.546 --rc genhtml_legend=1 00:05:49.546 --rc geninfo_all_blocks=1 00:05:49.546 --rc geninfo_unexecuted_blocks=1 00:05:49.546 00:05:49.546 ' 00:05:49.546 10:06:08 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:49.546 10:06:08 -- setup/devices.sh@192 -- # setup reset 00:05:49.546 10:06:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:49.546 10:06:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:50.114 10:06:09 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:50.114 10:06:09 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:50.114 10:06:09 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:50.114 10:06:09 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:50.114 10:06:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:50.114 10:06:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:50.114 10:06:09 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:50.114 10:06:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:50.114 10:06:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:50.114 10:06:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:50.114 10:06:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:50.114 10:06:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:50.114 10:06:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:50.114 10:06:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:50.114 10:06:09 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:50.114 10:06:09 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:50.114 10:06:09 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:50.114 10:06:09 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:50.114 10:06:09 -- setup/devices.sh@196 -- # blocks=() 00:05:50.114 10:06:09 -- setup/devices.sh@196 -- # declare -a blocks 00:05:50.114 10:06:09 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:50.114 10:06:09 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:50.114 10:06:09 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:50.114 10:06:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.114 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:50.115 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:50.115 10:06:09 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:50.115 10:06:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:50.115 10:06:09 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:50.115 10:06:09 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:50.115 10:06:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:50.115 No valid GPT data, bailing 00:05:50.115 10:06:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:50.115 10:06:09 -- scripts/common.sh@393 -- # pt= 00:05:50.115 10:06:09 -- scripts/common.sh@394 -- # return 1 00:05:50.115 10:06:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:50.115 10:06:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:50.115 10:06:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:50.115 10:06:09 -- setup/common.sh@80 -- # echo 5368709120 00:05:50.115 10:06:09 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:50.115 10:06:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.115 10:06:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:50.115 10:06:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.115 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:50.115 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:50.115 10:06:09 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:50.115 10:06:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:50.115 10:06:09 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
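Each candidate disk above has to pass the same three checks before it is added to the blocks array: it must not be zoned, it must not already carry a partition table (spdk-gpt.py and blkid report no PTTYPE), and it must be at least min_disk_size bytes. A minimal sketch of that filter, with usable_test_disk as an assumed name:

usable_test_disk() {
    local dev=$1
    local min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472, as in the trace
    [[ $(< /sys/block/$dev/queue/zoned) == none ]] || return 1
    [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]] || return 1
    local sectors=$(< /sys/block/$dev/size)            # 512-byte sectors
    (( sectors * 512 >= min_disk_size ))
}

usable_test_disk nvme0n1 && echo "nvme0n1 (5368709120 bytes) is usable"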
00:05:50.115 10:06:09 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:50.115 10:06:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:50.374 No valid GPT data, bailing 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # pt= 00:05:50.374 10:06:09 -- scripts/common.sh@394 -- # return 1 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:50.374 10:06:09 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:50.374 10:06:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:50.374 10:06:09 -- setup/common.sh@80 -- # echo 4294967296 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.374 10:06:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.374 10:06:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:50.374 10:06:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.374 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:50.374 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:50.374 10:06:09 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:50.374 10:06:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:50.374 10:06:09 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:50.374 10:06:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:50.374 No valid GPT data, bailing 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # pt= 00:05:50.374 10:06:09 -- scripts/common.sh@394 -- # return 1 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:50.374 10:06:09 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:50.374 10:06:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:50.374 10:06:09 -- setup/common.sh@80 -- # echo 4294967296 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.374 10:06:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.374 10:06:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:50.374 10:06:09 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:50.374 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:50.374 10:06:09 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:50.374 10:06:09 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:50.374 10:06:09 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:50.374 10:06:09 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:50.374 10:06:09 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:50.374 No valid GPT data, bailing 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:50.374 10:06:09 -- scripts/common.sh@393 -- # pt= 00:05:50.374 10:06:09 -- scripts/common.sh@394 -- # return 1 00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:50.374 10:06:09 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:50.374 10:06:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:50.374 10:06:09 -- setup/common.sh@80 -- # echo 4294967296 
00:05:50.374 10:06:09 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:50.374 10:06:09 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:50.374 10:06:09 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:50.374 10:06:09 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:50.374 10:06:09 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:50.374 10:06:09 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:50.374 10:06:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.374 10:06:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.374 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:50.374 ************************************ 00:05:50.374 START TEST nvme_mount 00:05:50.374 ************************************ 00:05:50.374 10:06:09 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:50.374 10:06:09 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:50.374 10:06:09 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:50.375 10:06:09 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.375 10:06:09 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.375 10:06:09 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:50.375 10:06:09 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:50.375 10:06:09 -- setup/common.sh@40 -- # local part_no=1 00:05:50.375 10:06:09 -- setup/common.sh@41 -- # local size=1073741824 00:05:50.375 10:06:09 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:50.375 10:06:09 -- setup/common.sh@44 -- # parts=() 00:05:50.375 10:06:09 -- setup/common.sh@44 -- # local parts 00:05:50.375 10:06:09 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:50.375 10:06:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.375 10:06:09 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:50.375 10:06:09 -- setup/common.sh@46 -- # (( part++ )) 00:05:50.375 10:06:09 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:50.375 10:06:09 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:50.375 10:06:09 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:50.375 10:06:09 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:51.750 Creating new GPT entries in memory. 00:05:51.750 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:51.750 other utilities. 00:05:51.750 10:06:10 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:51.750 10:06:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.750 10:06:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:51.750 10:06:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:51.750 10:06:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:52.686 Creating new GPT entries in memory. 00:05:52.686 The operation has completed successfully. 
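The nvme_mount run that follows boils down to a short command sequence; a condensed sketch using the device, mountpoint and sector range from the trace (the dummy test file is created by a bare ':' redirect in devices.sh, approximated here with touch):

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # drop any existing GPT/MBR signatures
sgdisk "$disk" --new=1:2048:264191       # 128 MiB partition 1 (512-byte sectors)
mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                   # dummy file the verify step looks for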
00:05:52.686 10:06:11 -- setup/common.sh@57 -- # (( part++ )) 00:05:52.686 10:06:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:52.686 10:06:11 -- setup/common.sh@62 -- # wait 65636 00:05:52.686 10:06:11 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.686 10:06:11 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:52.686 10:06:11 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.686 10:06:11 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:52.686 10:06:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:52.686 10:06:11 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.686 10:06:12 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.686 10:06:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:52.686 10:06:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:52.686 10:06:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.686 10:06:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:52.686 10:06:12 -- setup/devices.sh@53 -- # local found=0 00:05:52.686 10:06:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:52.686 10:06:12 -- setup/devices.sh@56 -- # : 00:05:52.686 10:06:12 -- setup/devices.sh@59 -- # local pci status 00:05:52.686 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.686 10:06:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:52.686 10:06:12 -- setup/devices.sh@47 -- # setup output config 00:05:52.686 10:06:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.686 10:06:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.686 10:06:12 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.686 10:06:12 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:52.686 10:06:12 -- setup/devices.sh@63 -- # found=1 00:05:52.686 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.686 10:06:12 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.686 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.945 10:06:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.945 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.203 10:06:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.204 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.204 10:06:12 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.204 10:06:12 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:53.204 10:06:12 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.204 10:06:12 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.204 10:06:12 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.204 10:06:12 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:53.204 10:06:12 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.204 10:06:12 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.204 10:06:12 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.204 10:06:12 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.204 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.204 10:06:12 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.204 10:06:12 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:53.462 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:53.462 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:53.463 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:53.463 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:53.463 10:06:12 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:53.463 10:06:12 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:53.463 10:06:12 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.463 10:06:12 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:53.463 10:06:12 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:53.463 10:06:12 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.463 10:06:12 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.463 10:06:12 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:53.463 10:06:12 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:53.463 10:06:12 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.463 10:06:12 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:53.463 10:06:12 -- setup/devices.sh@53 -- # local found=0 00:05:53.463 10:06:12 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:53.463 10:06:12 -- setup/devices.sh@56 -- # : 00:05:53.463 10:06:12 -- setup/devices.sh@59 -- # local pci status 00:05:53.463 10:06:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.463 10:06:12 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:53.463 10:06:12 -- setup/devices.sh@47 -- # setup output config 00:05:53.463 10:06:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.463 10:06:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.722 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.722 10:06:13 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:53.722 10:06:13 -- setup/devices.sh@63 -- # found=1 00:05:53.722 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.722 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.722 
10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.981 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.981 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.981 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:53.981 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.240 10:06:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.240 10:06:13 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:54.240 10:06:13 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.240 10:06:13 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:54.240 10:06:13 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:54.240 10:06:13 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.240 10:06:13 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:54.240 10:06:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:54.240 10:06:13 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:54.240 10:06:13 -- setup/devices.sh@50 -- # local mount_point= 00:05:54.240 10:06:13 -- setup/devices.sh@51 -- # local test_file= 00:05:54.240 10:06:13 -- setup/devices.sh@53 -- # local found=0 00:05:54.240 10:06:13 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:54.240 10:06:13 -- setup/devices.sh@59 -- # local pci status 00:05:54.240 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.240 10:06:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:54.240 10:06:13 -- setup/devices.sh@47 -- # setup output config 00:05:54.240 10:06:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:54.240 10:06:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:54.240 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.240 10:06:13 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:54.240 10:06:13 -- setup/devices.sh@63 -- # found=1 00:05:54.240 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.240 10:06:13 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.240 10:06:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.807 10:06:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.807 10:06:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.807 10:06:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:54.807 10:06:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:54.807 10:06:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:54.807 10:06:14 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:54.807 10:06:14 -- setup/devices.sh@68 -- # return 0 00:05:54.807 10:06:14 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:54.807 10:06:14 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:54.807 10:06:14 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.807 10:06:14 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.807 10:06:14 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.807 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:54.807 00:05:54.807 real 0m4.330s 00:05:54.807 user 0m0.998s 00:05:54.807 sys 0m1.045s 00:05:54.807 10:06:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.807 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 END TEST nvme_mount 00:05:54.807 ************************************ 00:05:54.807 10:06:14 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:54.807 10:06:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.807 10:06:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.807 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:05:54.807 ************************************ 00:05:54.807 START TEST dm_mount 00:05:54.807 ************************************ 00:05:54.807 10:06:14 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:54.807 10:06:14 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:54.807 10:06:14 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:54.807 10:06:14 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:54.807 10:06:14 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:54.807 10:06:14 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:54.807 10:06:14 -- setup/common.sh@40 -- # local part_no=2 00:05:54.807 10:06:14 -- setup/common.sh@41 -- # local size=1073741824 00:05:54.807 10:06:14 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:54.807 10:06:14 -- setup/common.sh@44 -- # parts=() 00:05:54.807 10:06:14 -- setup/common.sh@44 -- # local parts 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.807 10:06:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part++ )) 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.807 10:06:14 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part++ )) 00:05:54.807 10:06:14 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:54.807 10:06:14 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:54.807 10:06:14 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:54.807 10:06:14 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:56.185 Creating new GPT entries in memory. 00:05:56.185 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:56.185 other utilities. 00:05:56.185 10:06:15 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:56.185 10:06:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:56.185 10:06:15 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:56.185 10:06:15 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:56.185 10:06:15 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:57.123 Creating new GPT entries in memory. 00:05:57.123 The operation has completed successfully. 00:05:57.123 10:06:16 -- setup/common.sh@57 -- # (( part++ )) 00:05:57.123 10:06:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:57.123 10:06:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
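dm_mount repeats the partitioning for a second 128 MiB partition and then stitches the two partitions into one device-mapper target. dmsetup create reads its mapping table from stdin; a hypothetical linear table over nvme0n1p1 and nvme0n1p2 (the real table is built inside devices.sh and is not shown in the trace) would look like:

p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")

dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF

readlink -f /dev/mapper/nvme_dm_test    # resolves to /dev/dm-0 in the trace
mkfs.ext4 -qF /dev/mapper/nvme_dm_test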
2048 : part_end + 1 )) 00:05:57.123 10:06:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:57.123 10:06:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:58.057 The operation has completed successfully. 00:05:58.057 10:06:17 -- setup/common.sh@57 -- # (( part++ )) 00:05:58.057 10:06:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:58.057 10:06:17 -- setup/common.sh@62 -- # wait 66090 00:05:58.057 10:06:17 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:58.057 10:06:17 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.057 10:06:17 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:58.057 10:06:17 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:58.057 10:06:17 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:58.057 10:06:17 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:58.057 10:06:17 -- setup/devices.sh@161 -- # break 00:05:58.057 10:06:17 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:58.057 10:06:17 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:58.057 10:06:17 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:58.057 10:06:17 -- setup/devices.sh@166 -- # dm=dm-0 00:05:58.057 10:06:17 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:58.057 10:06:17 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:58.057 10:06:17 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.057 10:06:17 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:58.057 10:06:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.057 10:06:17 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:58.057 10:06:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:58.057 10:06:17 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.057 10:06:17 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:58.057 10:06:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:58.057 10:06:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:58.057 10:06:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.057 10:06:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:58.057 10:06:17 -- setup/devices.sh@53 -- # local found=0 00:05:58.057 10:06:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:58.057 10:06:17 -- setup/devices.sh@56 -- # : 00:05:58.057 10:06:17 -- setup/devices.sh@59 -- # local pci status 00:05:58.057 10:06:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.057 10:06:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:58.057 10:06:17 -- setup/devices.sh@47 -- # setup output config 00:05:58.057 10:06:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.057 10:06:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.315 10:06:17 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.315 10:06:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:58.315 10:06:17 -- setup/devices.sh@63 -- # found=1 00:05:58.315 10:06:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.315 10:06:17 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.315 10:06:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.574 10:06:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.574 10:06:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.574 10:06:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.574 10:06:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.574 10:06:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.574 10:06:18 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:58.574 10:06:18 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.574 10:06:18 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:58.574 10:06:18 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:58.574 10:06:18 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:58.574 10:06:18 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:58.574 10:06:18 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:58.574 10:06:18 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:58.574 10:06:18 -- setup/devices.sh@50 -- # local mount_point= 00:05:58.574 10:06:18 -- setup/devices.sh@51 -- # local test_file= 00:05:58.574 10:06:18 -- setup/devices.sh@53 -- # local found=0 00:05:58.574 10:06:18 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:58.574 10:06:18 -- setup/devices.sh@59 -- # local pci status 00:05:58.574 10:06:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.574 10:06:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:58.574 10:06:18 -- setup/devices.sh@47 -- # setup output config 00:05:58.574 10:06:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.574 10:06:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:58.833 10:06:18 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.833 10:06:18 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:58.833 10:06:18 -- setup/devices.sh@63 -- # found=1 00:05:58.833 10:06:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.833 10:06:18 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:58.833 10:06:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.091 10:06:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:59.091 10:06:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.350 10:06:18 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:59.350 10:06:18 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:59.350 10:06:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:59.350 10:06:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:59.350 10:06:18 -- setup/devices.sh@68 -- # return 0 00:05:59.350 10:06:18 -- setup/devices.sh@187 -- # cleanup_dm 00:05:59.350 10:06:18 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.350 10:06:18 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:59.350 10:06:18 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:59.350 10:06:18 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.350 10:06:18 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:59.350 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:59.350 10:06:18 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:59.350 10:06:18 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:59.350 00:05:59.350 real 0m4.528s 00:05:59.350 user 0m0.663s 00:05:59.350 sys 0m0.784s 00:05:59.350 10:06:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.350 ************************************ 00:05:59.350 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.350 END TEST dm_mount 00:05:59.350 ************************************ 00:05:59.350 10:06:18 -- setup/devices.sh@1 -- # cleanup 00:05:59.350 10:06:18 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:59.350 10:06:18 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:59.350 10:06:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.350 10:06:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:59.350 10:06:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:59.350 10:06:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:59.609 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:59.609 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:59.609 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:59.609 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:59.609 10:06:19 -- setup/devices.sh@12 -- # cleanup_dm 00:05:59.609 10:06:19 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:59.609 10:06:19 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:59.609 10:06:19 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:59.609 10:06:19 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:59.609 10:06:19 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:59.609 10:06:19 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:59.609 00:05:59.609 real 0m10.379s 00:05:59.609 user 0m2.394s 00:05:59.609 sys 0m2.359s 00:05:59.609 10:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.609 ************************************ 00:05:59.609 END TEST devices 00:05:59.609 ************************************ 00:05:59.609 10:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.867 00:05:59.867 real 0m21.382s 00:05:59.867 user 0m7.362s 00:05:59.867 sys 0m8.430s 00:05:59.867 ************************************ 00:05:59.867 END TEST setup.sh 00:05:59.867 ************************************ 00:05:59.867 10:06:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.867 10:06:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.867 10:06:19 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:59.867 Hugepages 00:05:59.867 node hugesize free / total 00:05:59.867 node0 1048576kB 0 / 0 00:05:59.867 node0 2048kB 2048 / 2048 00:05:59.867 00:05:59.867 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:59.867 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:00.125 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:00.125 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:00.125 10:06:19 -- spdk/autotest.sh@128 -- # uname -s 00:06:00.125 10:06:19 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:06:00.125 10:06:19 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:06:00.125 10:06:19 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:00.692 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:00.950 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.950 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:06:00.950 10:06:20 -- common/autotest_common.sh@1527 -- # sleep 1 00:06:01.885 10:06:21 -- common/autotest_common.sh@1528 -- # bdfs=() 00:06:01.885 10:06:21 -- common/autotest_common.sh@1528 -- # local bdfs 00:06:01.885 10:06:21 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:06:01.885 10:06:21 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:06:01.885 10:06:21 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:01.885 10:06:21 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:01.885 10:06:21 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.885 10:06:21 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.885 10:06:21 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:02.143 10:06:21 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:06:02.143 10:06:21 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:02.143 10:06:21 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:02.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.401 Waiting for block devices as requested 00:06:02.401 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.659 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:06:02.659 10:06:22 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:02.659 10:06:22 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:06:02.659 10:06:22 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:02.659 10:06:22 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:02.659 10:06:22 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:02.659 10:06:22 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1552 -- # continue 00:06:02.659 10:06:22 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:06:02.659 10:06:22 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:06:02.659 10:06:22 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:02.659 10:06:22 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:02.659 10:06:22 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:02.659 10:06:22 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:02.659 10:06:22 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:02.659 10:06:22 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:02.659 10:06:22 -- common/autotest_common.sh@1552 -- # continue 00:06:02.659 10:06:22 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:06:02.659 10:06:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.659 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 10:06:22 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:06:02.660 10:06:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.660 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 10:06:22 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:03.226 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.485 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:03.485 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:06:03.485 10:06:22 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:06:03.485 10:06:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.485 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.485 10:06:22 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:06:03.485 10:06:22 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:06:03.485 10:06:22 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:06:03.485 10:06:22 -- common/autotest_common.sh@1572 -- # bdfs=() 00:06:03.485 10:06:22 -- common/autotest_common.sh@1572 -- # local bdfs 00:06:03.485 10:06:22 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:06:03.485 10:06:22 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:03.485 10:06:22 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:03.485 10:06:22 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.485 10:06:22 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.485 10:06:22 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:03.485 10:06:23 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:06:03.485 10:06:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:03.745 10:06:23 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.745 10:06:23 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:03.745 10:06:23 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.745 10:06:23 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.745 10:06:23 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:03.745 10:06:23 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:06:03.745 10:06:23 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:03.745 10:06:23 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:03.745 10:06:23 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:06:03.745 10:06:23 -- common/autotest_common.sh@1588 -- # return 0 00:06:03.745 10:06:23 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:06:03.745 10:06:23 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:06:03.745 10:06:23 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:03.745 10:06:23 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:03.745 10:06:23 -- spdk/autotest.sh@160 -- # timing_enter lib 00:06:03.745 10:06:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:03.745 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 10:06:23 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.745 10:06:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.745 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:03.745 ************************************ 00:06:03.745 START TEST env 00:06:03.745 ************************************ 00:06:03.745 10:06:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:03.745 * Looking for test storage... 
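The `run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh` call above, like every `run_test` in this log, comes from autotest_common.sh: it prints the START TEST / END TEST banners and the real/user/sys timings that recur throughout the output. A simplified sketch of that wrapper, for orientation only (the real helper also validates its argument count and toggles xtrace, which is omitted here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # the wrapped test, e.g. a test script or function
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
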
00:06:03.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:03.745 10:06:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:03.745 10:06:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:03.745 10:06:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.745 10:06:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:03.745 10:06:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:03.745 10:06:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:03.745 10:06:23 -- scripts/common.sh@335 -- # IFS=.-: 00:06:03.745 10:06:23 -- scripts/common.sh@335 -- # read -ra ver1 00:06:03.745 10:06:23 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.745 10:06:23 -- scripts/common.sh@336 -- # read -ra ver2 00:06:03.745 10:06:23 -- scripts/common.sh@337 -- # local 'op=<' 00:06:03.745 10:06:23 -- scripts/common.sh@339 -- # ver1_l=2 00:06:03.745 10:06:23 -- scripts/common.sh@340 -- # ver2_l=1 00:06:03.745 10:06:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:03.745 10:06:23 -- scripts/common.sh@343 -- # case "$op" in 00:06:03.745 10:06:23 -- scripts/common.sh@344 -- # : 1 00:06:03.745 10:06:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:03.745 10:06:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.745 10:06:23 -- scripts/common.sh@364 -- # decimal 1 00:06:03.745 10:06:23 -- scripts/common.sh@352 -- # local d=1 00:06:03.745 10:06:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.745 10:06:23 -- scripts/common.sh@354 -- # echo 1 00:06:03.745 10:06:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:03.745 10:06:23 -- scripts/common.sh@365 -- # decimal 2 00:06:03.745 10:06:23 -- scripts/common.sh@352 -- # local d=2 00:06:03.745 10:06:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.745 10:06:23 -- scripts/common.sh@354 -- # echo 2 00:06:03.745 10:06:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:03.745 10:06:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:03.745 10:06:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:03.745 10:06:23 -- scripts/common.sh@367 -- # return 0 00:06:03.745 10:06:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.745 --rc genhtml_branch_coverage=1 00:06:03.745 --rc genhtml_function_coverage=1 00:06:03.745 --rc genhtml_legend=1 00:06:03.745 --rc geninfo_all_blocks=1 00:06:03.745 --rc geninfo_unexecuted_blocks=1 00:06:03.745 00:06:03.745 ' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.745 --rc genhtml_branch_coverage=1 00:06:03.745 --rc genhtml_function_coverage=1 00:06:03.745 --rc genhtml_legend=1 00:06:03.745 --rc geninfo_all_blocks=1 00:06:03.745 --rc geninfo_unexecuted_blocks=1 00:06:03.745 00:06:03.745 ' 00:06:03.745 10:06:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.745 --rc genhtml_branch_coverage=1 00:06:03.745 --rc genhtml_function_coverage=1 00:06:03.745 --rc genhtml_legend=1 00:06:03.745 --rc geninfo_all_blocks=1 00:06:03.746 --rc geninfo_unexecuted_blocks=1 00:06:03.746 00:06:03.746 ' 00:06:03.746 10:06:23 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.746 --rc genhtml_branch_coverage=1 00:06:03.746 --rc genhtml_function_coverage=1 00:06:03.746 --rc genhtml_legend=1 00:06:03.746 --rc geninfo_all_blocks=1 00:06:03.746 --rc geninfo_unexecuted_blocks=1 00:06:03.746 00:06:03.746 ' 00:06:03.746 10:06:23 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.746 10:06:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.746 10:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.746 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:03.746 ************************************ 00:06:03.746 START TEST env_memory 00:06:03.746 ************************************ 00:06:03.746 10:06:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:03.746 00:06:03.746 00:06:03.746 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.746 http://cunit.sourceforge.net/ 00:06:03.746 00:06:03.746 00:06:03.746 Suite: memory 00:06:04.005 Test: alloc and free memory map ...[2024-11-19 10:06:23.311592] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:04.005 passed 00:06:04.005 Test: mem map translation ...[2024-11-19 10:06:23.343261] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:04.005 [2024-11-19 10:06:23.343320] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:04.005 [2024-11-19 10:06:23.343397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:04.005 [2024-11-19 10:06:23.343418] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:04.005 passed 00:06:04.005 Test: mem map registration ...[2024-11-19 10:06:23.407871] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:04.005 [2024-11-19 10:06:23.407919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:04.005 passed 00:06:04.005 Test: mem map adjacent registrations ...passed 00:06:04.005 00:06:04.005 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.005 suites 1 1 n/a 0 0 00:06:04.005 tests 4 4 4 0 0 00:06:04.005 asserts 152 152 152 0 n/a 00:06:04.005 00:06:04.005 Elapsed time = 0.216 seconds 00:06:04.005 00:06:04.005 real 0m0.234s 00:06:04.005 user 0m0.217s 00:06:04.005 sys 0m0.014s 00:06:04.005 10:06:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.005 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:04.005 ************************************ 00:06:04.005 END TEST env_memory 00:06:04.005 ************************************ 00:06:04.005 10:06:23 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:04.005 10:06:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.005 10:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.005 10:06:23 -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.005 ************************************ 00:06:04.005 START TEST env_vtophys 00:06:04.005 ************************************ 00:06:04.005 10:06:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:04.264 EAL: lib.eal log level changed from notice to debug 00:06:04.264 EAL: Detected lcore 0 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 1 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 2 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 3 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 4 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 5 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 6 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 7 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 8 as core 0 on socket 0 00:06:04.264 EAL: Detected lcore 9 as core 0 on socket 0 00:06:04.264 EAL: Maximum logical cores by configuration: 128 00:06:04.264 EAL: Detected CPU lcores: 10 00:06:04.264 EAL: Detected NUMA nodes: 1 00:06:04.264 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:04.264 EAL: Detected shared linkage of DPDK 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:04.264 EAL: Registered [vdev] bus. 00:06:04.264 EAL: bus.vdev log level changed from disabled to notice 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:04.264 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:04.264 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:04.264 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:04.265 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:04.265 EAL: No shared files mode enabled, IPC will be disabled 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Selected IOVA mode 'PA' 00:06:04.265 EAL: Probing VFIO support... 00:06:04.265 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:04.265 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:04.265 EAL: Ask a virtual area of 0x2e000 bytes 00:06:04.265 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:04.265 EAL: Setting up physically contiguous memory... 
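The EAL lines above explain why this run ends up in IOVA mode 'PA': the vfio kernel modules are not present in the test VM, so the NVMe devices stay on uio_pci_generic. A quick manual check for the same condition, using the standard sysfs paths the EAL probes (a side note, not something the suite runs):

    if [[ -d /sys/module/vfio ]] && [[ -d /sys/module/vfio_pci ]]; then
        echo "vfio and vfio-pci are loaded; EAL could use VFIO and IOVA mode VA"
    else
        echo "vfio not loaded; devices stay on uio_pci_generic and EAL selects IOVA mode PA"
    fi
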
00:06:04.265 EAL: Setting maximum number of open files to 524288 00:06:04.265 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:04.265 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:04.265 EAL: Ask a virtual area of 0x61000 bytes 00:06:04.265 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:04.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.265 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.265 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:04.265 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:04.265 EAL: Ask a virtual area of 0x61000 bytes 00:06:04.265 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:04.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.265 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.265 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:04.265 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:04.265 EAL: Ask a virtual area of 0x61000 bytes 00:06:04.265 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:04.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.265 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.265 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:04.265 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:04.265 EAL: Ask a virtual area of 0x61000 bytes 00:06:04.265 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:04.265 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:04.265 EAL: Ask a virtual area of 0x400000000 bytes 00:06:04.265 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:04.265 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:04.265 EAL: Hugepages will be freed exactly as allocated. 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: TSC frequency is ~2200000 KHz 00:06:04.265 EAL: Main lcore 0 is ready (tid=7ff11af09a00;cpuset=[0]) 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 0 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 2MB 00:06:04.265 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:04.265 EAL: Mem event callback 'spdk:(nil)' registered 00:06:04.265 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:04.265 00:06:04.265 00:06:04.265 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.265 http://cunit.sourceforge.net/ 00:06:04.265 00:06:04.265 00:06:04.265 Suite: components_suite 00:06:04.265 Test: vtophys_malloc_test ...passed 00:06:04.265 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
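The memseg lists being reserved above are backed by the 2048 kB hugepage pool that the earlier `setup.sh status` output reported (2048 / 2048 pages on node0). To inspect that pool outside the test, the standard procfs/sysfs locations are (illustration only, not part of the suite):

    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
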
00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 4MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 4MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 6MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 6MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 10MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 10MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 18MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 18MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 34MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 34MB 00:06:04.265 EAL: Trying to obtain current memory policy. 
00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 66MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 66MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.265 EAL: Restoring previous memory policy: 4 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was expanded by 130MB 00:06:04.265 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.265 EAL: request: mp_malloc_sync 00:06:04.265 EAL: No shared files mode enabled, IPC is disabled 00:06:04.265 EAL: Heap on socket 0 was shrunk by 130MB 00:06:04.265 EAL: Trying to obtain current memory policy. 00:06:04.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.524 EAL: Restoring previous memory policy: 4 00:06:04.524 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.524 EAL: request: mp_malloc_sync 00:06:04.524 EAL: No shared files mode enabled, IPC is disabled 00:06:04.524 EAL: Heap on socket 0 was expanded by 258MB 00:06:04.524 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.524 EAL: request: mp_malloc_sync 00:06:04.524 EAL: No shared files mode enabled, IPC is disabled 00:06:04.524 EAL: Heap on socket 0 was shrunk by 258MB 00:06:04.524 EAL: Trying to obtain current memory policy. 00:06:04.525 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.525 EAL: Restoring previous memory policy: 4 00:06:04.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.525 EAL: request: mp_malloc_sync 00:06:04.525 EAL: No shared files mode enabled, IPC is disabled 00:06:04.525 EAL: Heap on socket 0 was expanded by 514MB 00:06:04.525 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.783 EAL: request: mp_malloc_sync 00:06:04.783 EAL: No shared files mode enabled, IPC is disabled 00:06:04.783 EAL: Heap on socket 0 was shrunk by 514MB 00:06:04.783 EAL: Trying to obtain current memory policy. 
00:06:04.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:04.783 EAL: Restoring previous memory policy: 4 00:06:04.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:04.783 EAL: request: mp_malloc_sync 00:06:04.783 EAL: No shared files mode enabled, IPC is disabled 00:06:04.783 EAL: Heap on socket 0 was expanded by 1026MB 00:06:04.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.043 passed 00:06:05.043 00:06:05.043 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.043 suites 1 1 n/a 0 0 00:06:05.043 tests 2 2 2 0 0 00:06:05.043 asserts 5302 5302 5302 0 n/a 00:06:05.043 00:06:05.043 Elapsed time = 0.691 seconds 00:06:05.043 EAL: request: mp_malloc_sync 00:06:05.043 EAL: No shared files mode enabled, IPC is disabled 00:06:05.043 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:05.043 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.043 EAL: request: mp_malloc_sync 00:06:05.043 EAL: No shared files mode enabled, IPC is disabled 00:06:05.043 EAL: Heap on socket 0 was shrunk by 2MB 00:06:05.043 EAL: No shared files mode enabled, IPC is disabled 00:06:05.043 EAL: No shared files mode enabled, IPC is disabled 00:06:05.043 EAL: No shared files mode enabled, IPC is disabled 00:06:05.043 00:06:05.043 real 0m0.881s 00:06:05.043 user 0m0.450s 00:06:05.043 sys 0m0.301s 00:06:05.043 10:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.043 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.043 ************************************ 00:06:05.043 END TEST env_vtophys 00:06:05.043 ************************************ 00:06:05.043 10:06:24 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.043 10:06:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.043 10:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.043 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.043 ************************************ 00:06:05.043 START TEST env_pci 00:06:05.043 ************************************ 00:06:05.043 10:06:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:05.043 00:06:05.043 00:06:05.043 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.043 http://cunit.sourceforge.net/ 00:06:05.043 00:06:05.043 00:06:05.043 Suite: pci 00:06:05.043 Test: pci_hook ...[2024-11-19 10:06:24.479924] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67224 has claimed it 00:06:05.043 passed 00:06:05.043 00:06:05.043 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.043 suites 1 1 n/a 0 0 00:06:05.043 tests 1 1 1 0 0 00:06:05.043 asserts 25 25 25 0 n/a 00:06:05.043 00:06:05.043 Elapsed time = 0.002 seconds 00:06:05.043 EAL: Cannot find device (10000:00:01.0) 00:06:05.043 EAL: Failed to attach device on primary process 00:06:05.043 00:06:05.043 real 0m0.016s 00:06:05.043 user 0m0.006s 00:06:05.043 sys 0m0.010s 00:06:05.043 10:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.043 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.043 ************************************ 00:06:05.043 END TEST env_pci 00:06:05.043 ************************************ 00:06:05.043 10:06:24 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:05.043 10:06:24 -- env/env.sh@15 -- # uname 00:06:05.043 10:06:24 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:05.043 10:06:24 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:06:05.043 10:06:24 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.043 10:06:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:05.043 10:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.043 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.043 ************************************ 00:06:05.043 START TEST env_dpdk_post_init 00:06:05.043 ************************************ 00:06:05.043 10:06:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:05.043 EAL: Detected CPU lcores: 10 00:06:05.043 EAL: Detected NUMA nodes: 1 00:06:05.043 EAL: Detected shared linkage of DPDK 00:06:05.043 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.043 EAL: Selected IOVA mode 'PA' 00:06:05.302 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.302 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:06:05.302 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:06:05.302 Starting DPDK initialization... 00:06:05.302 Starting SPDK post initialization... 00:06:05.302 SPDK NVMe probe 00:06:05.302 Attaching to 0000:00:06.0 00:06:05.302 Attaching to 0000:00:07.0 00:06:05.302 Attached to 0000:00:06.0 00:06:05.302 Attached to 0000:00:07.0 00:06:05.302 Cleaning up... 00:06:05.302 00:06:05.302 real 0m0.177s 00:06:05.302 user 0m0.043s 00:06:05.302 sys 0m0.034s 00:06:05.302 10:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.302 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.302 ************************************ 00:06:05.302 END TEST env_dpdk_post_init 00:06:05.302 ************************************ 00:06:05.302 10:06:24 -- env/env.sh@26 -- # uname 00:06:05.302 10:06:24 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:05.302 10:06:24 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.302 10:06:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.302 10:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.302 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.302 ************************************ 00:06:05.302 START TEST env_mem_callbacks 00:06:05.302 ************************************ 00:06:05.302 10:06:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:05.302 EAL: Detected CPU lcores: 10 00:06:05.302 EAL: Detected NUMA nodes: 1 00:06:05.302 EAL: Detected shared linkage of DPDK 00:06:05.302 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:05.302 EAL: Selected IOVA mode 'PA' 00:06:05.561 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:05.561 00:06:05.561 00:06:05.561 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.561 http://cunit.sourceforge.net/ 00:06:05.561 00:06:05.561 00:06:05.561 Suite: memory 00:06:05.561 Test: test ... 
00:06:05.561 register 0x200000200000 2097152 00:06:05.561 malloc 3145728 00:06:05.561 register 0x200000400000 4194304 00:06:05.561 buf 0x200000500000 len 3145728 PASSED 00:06:05.561 malloc 64 00:06:05.561 buf 0x2000004fff40 len 64 PASSED 00:06:05.561 malloc 4194304 00:06:05.561 register 0x200000800000 6291456 00:06:05.561 buf 0x200000a00000 len 4194304 PASSED 00:06:05.561 free 0x200000500000 3145728 00:06:05.561 free 0x2000004fff40 64 00:06:05.561 unregister 0x200000400000 4194304 PASSED 00:06:05.561 free 0x200000a00000 4194304 00:06:05.561 unregister 0x200000800000 6291456 PASSED 00:06:05.561 malloc 8388608 00:06:05.561 register 0x200000400000 10485760 00:06:05.561 buf 0x200000600000 len 8388608 PASSED 00:06:05.561 free 0x200000600000 8388608 00:06:05.561 unregister 0x200000400000 10485760 PASSED 00:06:05.561 passed 00:06:05.561 00:06:05.561 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.561 suites 1 1 n/a 0 0 00:06:05.561 tests 1 1 1 0 0 00:06:05.561 asserts 15 15 15 0 n/a 00:06:05.561 00:06:05.561 Elapsed time = 0.006 seconds 00:06:05.561 00:06:05.561 real 0m0.134s 00:06:05.561 user 0m0.013s 00:06:05.561 sys 0m0.019s 00:06:05.561 10:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.562 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.562 ************************************ 00:06:05.562 END TEST env_mem_callbacks 00:06:05.562 ************************************ 00:06:05.562 00:06:05.562 real 0m1.877s 00:06:05.562 user 0m0.921s 00:06:05.562 sys 0m0.619s 00:06:05.562 10:06:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.562 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.562 ************************************ 00:06:05.562 END TEST env 00:06:05.562 ************************************ 00:06:05.562 10:06:24 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.562 10:06:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.562 10:06:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.562 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:06:05.562 ************************************ 00:06:05.562 START TEST rpc 00:06:05.562 ************************************ 00:06:05.562 10:06:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:05.562 * Looking for test storage... 
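The rpc suite that starts here drives a running spdk_tgt over its JSON-RPC socket and asserts on the replies with jq, as the bdev_get_bdevs / `jq length` checks further down show. Stripped of the rpc_cmd helper, the idiom is roughly the following (paths assume the repo layout used in this job):

    # list bdevs and count them: 0 before creation, 1 after bdev_malloc_create 8 512
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]
    malloc=$($rpc bdev_malloc_create 8 512)
    [ "$($rpc bdev_get_bdevs | jq length)" -eq 1 ]
    $rpc bdev_malloc_delete "$malloc"
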
00:06:05.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:05.562 10:06:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.562 10:06:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.562 10:06:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.923 10:06:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.923 10:06:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.923 10:06:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.923 10:06:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.923 10:06:25 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.923 10:06:25 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.923 10:06:25 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.923 10:06:25 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.923 10:06:25 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.923 10:06:25 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.923 10:06:25 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.923 10:06:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.923 10:06:25 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.923 10:06:25 -- scripts/common.sh@344 -- # : 1 00:06:05.923 10:06:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.923 10:06:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.923 10:06:25 -- scripts/common.sh@364 -- # decimal 1 00:06:05.923 10:06:25 -- scripts/common.sh@352 -- # local d=1 00:06:05.923 10:06:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.923 10:06:25 -- scripts/common.sh@354 -- # echo 1 00:06:05.923 10:06:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.923 10:06:25 -- scripts/common.sh@365 -- # decimal 2 00:06:05.923 10:06:25 -- scripts/common.sh@352 -- # local d=2 00:06:05.923 10:06:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.923 10:06:25 -- scripts/common.sh@354 -- # echo 2 00:06:05.923 10:06:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.923 10:06:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.923 10:06:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.923 10:06:25 -- scripts/common.sh@367 -- # return 0 00:06:05.923 10:06:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.923 10:06:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.923 --rc genhtml_branch_coverage=1 00:06:05.923 --rc genhtml_function_coverage=1 00:06:05.923 --rc genhtml_legend=1 00:06:05.923 --rc geninfo_all_blocks=1 00:06:05.923 --rc geninfo_unexecuted_blocks=1 00:06:05.923 00:06:05.923 ' 00:06:05.923 10:06:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.923 --rc genhtml_branch_coverage=1 00:06:05.923 --rc genhtml_function_coverage=1 00:06:05.923 --rc genhtml_legend=1 00:06:05.923 --rc geninfo_all_blocks=1 00:06:05.923 --rc geninfo_unexecuted_blocks=1 00:06:05.923 00:06:05.923 ' 00:06:05.923 10:06:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.923 --rc genhtml_branch_coverage=1 00:06:05.923 --rc genhtml_function_coverage=1 00:06:05.923 --rc genhtml_legend=1 00:06:05.923 --rc geninfo_all_blocks=1 00:06:05.923 --rc geninfo_unexecuted_blocks=1 00:06:05.923 00:06:05.923 ' 00:06:05.923 10:06:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.923 --rc genhtml_branch_coverage=1 00:06:05.923 --rc genhtml_function_coverage=1 00:06:05.923 --rc genhtml_legend=1 00:06:05.923 --rc geninfo_all_blocks=1 00:06:05.923 --rc geninfo_unexecuted_blocks=1 00:06:05.923 00:06:05.923 ' 00:06:05.923 10:06:25 -- rpc/rpc.sh@65 -- # spdk_pid=67346 00:06:05.923 10:06:25 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.923 10:06:25 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:05.923 10:06:25 -- rpc/rpc.sh@67 -- # waitforlisten 67346 00:06:05.923 10:06:25 -- common/autotest_common.sh@829 -- # '[' -z 67346 ']' 00:06:05.923 10:06:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.923 10:06:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.924 10:06:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.924 10:06:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.924 10:06:25 -- common/autotest_common.sh@10 -- # set +x 00:06:05.924 [2024-11-19 10:06:25.225203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:05.924 [2024-11-19 10:06:25.225304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67346 ] 00:06:05.924 [2024-11-19 10:06:25.365720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.924 [2024-11-19 10:06:25.403688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.924 [2024-11-19 10:06:25.403874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:05.924 [2024-11-19 10:06:25.403893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67346' to capture a snapshot of events at runtime. 00:06:05.924 [2024-11-19 10:06:25.403905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67346 for offline analysis/debug. 
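At this point rpc.sh has launched the target (`spdk_tgt -e bdev`, pid 67346) and is waiting on its RPC socket before the first test runs. The launch-and-wait pattern, reduced to its essentials (the real waitforlisten in autotest_common.sh adds retries, timeouts and pid liveness checks that are left out here):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    # poll the default UNIX-domain RPC socket until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
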
00:06:05.924 [2024-11-19 10:06:25.403941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.860 10:06:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.860 10:06:26 -- common/autotest_common.sh@862 -- # return 0 00:06:06.860 10:06:26 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.860 10:06:26 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:06.860 10:06:26 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:06.860 10:06:26 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:06.860 10:06:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.860 10:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.860 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:06.860 ************************************ 00:06:06.860 START TEST rpc_integrity 00:06:06.860 ************************************ 00:06:06.860 10:06:26 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:06.860 10:06:26 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:06.860 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.860 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:06.860 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.860 10:06:26 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:06.860 10:06:26 -- rpc/rpc.sh@13 -- # jq length 00:06:06.860 10:06:26 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:06.860 10:06:26 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:06.860 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.860 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:06.860 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.860 10:06:26 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:06.860 10:06:26 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:06.860 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.860 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:06.860 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.860 10:06:26 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:06.860 { 00:06:06.860 "aliases": [ 00:06:06.860 "e6e0fcc5-127f-4539-ad8c-293254f625fc" 00:06:06.860 ], 00:06:06.860 "assigned_rate_limits": { 00:06:06.860 "r_mbytes_per_sec": 0, 00:06:06.860 "rw_ios_per_sec": 0, 00:06:06.860 "rw_mbytes_per_sec": 0, 00:06:06.860 "w_mbytes_per_sec": 0 00:06:06.860 }, 00:06:06.860 "block_size": 512, 00:06:06.860 "claimed": false, 00:06:06.860 "driver_specific": {}, 00:06:06.860 "memory_domains": [ 00:06:06.860 { 00:06:06.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:06.860 "dma_device_type": 2 00:06:06.860 } 00:06:06.860 ], 00:06:06.860 "name": "Malloc0", 00:06:06.860 "num_blocks": 16384, 00:06:06.860 "product_name": "Malloc disk", 00:06:06.860 "supported_io_types": { 00:06:06.860 "abort": true, 00:06:06.860 "compare": false, 00:06:06.860 "compare_and_write": false, 00:06:06.860 "flush": true, 00:06:06.860 "nvme_admin": false, 00:06:06.860 "nvme_io": false, 00:06:06.860 "read": true, 00:06:06.860 "reset": true, 00:06:06.860 "unmap": true, 00:06:06.860 "write": true, 00:06:06.860 "write_zeroes": true 00:06:06.860 }, 
00:06:06.860 "uuid": "e6e0fcc5-127f-4539-ad8c-293254f625fc", 00:06:06.860 "zoned": false 00:06:06.860 } 00:06:06.860 ]' 00:06:06.860 10:06:26 -- rpc/rpc.sh@17 -- # jq length 00:06:07.118 10:06:26 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.118 10:06:26 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.118 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.118 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.118 [2024-11-19 10:06:26.451552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.118 [2024-11-19 10:06:26.451604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.118 [2024-11-19 10:06:26.451623] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x204f490 00:06:07.118 [2024-11-19 10:06:26.451633] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.118 [2024-11-19 10:06:26.453271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.118 [2024-11-19 10:06:26.453310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:07.118 Passthru0 00:06:07.118 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.118 10:06:26 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.118 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.118 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.118 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.118 10:06:26 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.118 { 00:06:07.118 "aliases": [ 00:06:07.118 "e6e0fcc5-127f-4539-ad8c-293254f625fc" 00:06:07.118 ], 00:06:07.118 "assigned_rate_limits": { 00:06:07.118 "r_mbytes_per_sec": 0, 00:06:07.118 "rw_ios_per_sec": 0, 00:06:07.118 "rw_mbytes_per_sec": 0, 00:06:07.118 "w_mbytes_per_sec": 0 00:06:07.118 }, 00:06:07.118 "block_size": 512, 00:06:07.118 "claim_type": "exclusive_write", 00:06:07.118 "claimed": true, 00:06:07.118 "driver_specific": {}, 00:06:07.118 "memory_domains": [ 00:06:07.118 { 00:06:07.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.118 "dma_device_type": 2 00:06:07.118 } 00:06:07.118 ], 00:06:07.118 "name": "Malloc0", 00:06:07.118 "num_blocks": 16384, 00:06:07.118 "product_name": "Malloc disk", 00:06:07.118 "supported_io_types": { 00:06:07.118 "abort": true, 00:06:07.118 "compare": false, 00:06:07.118 "compare_and_write": false, 00:06:07.118 "flush": true, 00:06:07.118 "nvme_admin": false, 00:06:07.118 "nvme_io": false, 00:06:07.118 "read": true, 00:06:07.118 "reset": true, 00:06:07.118 "unmap": true, 00:06:07.118 "write": true, 00:06:07.118 "write_zeroes": true 00:06:07.118 }, 00:06:07.118 "uuid": "e6e0fcc5-127f-4539-ad8c-293254f625fc", 00:06:07.118 "zoned": false 00:06:07.118 }, 00:06:07.119 { 00:06:07.119 "aliases": [ 00:06:07.119 "6c6e5f3a-93d9-5ef4-96fa-7962e14f060c" 00:06:07.119 ], 00:06:07.119 "assigned_rate_limits": { 00:06:07.119 "r_mbytes_per_sec": 0, 00:06:07.119 "rw_ios_per_sec": 0, 00:06:07.119 "rw_mbytes_per_sec": 0, 00:06:07.119 "w_mbytes_per_sec": 0 00:06:07.119 }, 00:06:07.119 "block_size": 512, 00:06:07.119 "claimed": false, 00:06:07.119 "driver_specific": { 00:06:07.119 "passthru": { 00:06:07.119 "base_bdev_name": "Malloc0", 00:06:07.119 "name": "Passthru0" 00:06:07.119 } 00:06:07.119 }, 00:06:07.119 "memory_domains": [ 00:06:07.119 { 00:06:07.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.119 "dma_device_type": 2 00:06:07.119 } 00:06:07.119 ], 
00:06:07.119 "name": "Passthru0", 00:06:07.119 "num_blocks": 16384, 00:06:07.119 "product_name": "passthru", 00:06:07.119 "supported_io_types": { 00:06:07.119 "abort": true, 00:06:07.119 "compare": false, 00:06:07.119 "compare_and_write": false, 00:06:07.119 "flush": true, 00:06:07.119 "nvme_admin": false, 00:06:07.119 "nvme_io": false, 00:06:07.119 "read": true, 00:06:07.119 "reset": true, 00:06:07.119 "unmap": true, 00:06:07.119 "write": true, 00:06:07.119 "write_zeroes": true 00:06:07.119 }, 00:06:07.119 "uuid": "6c6e5f3a-93d9-5ef4-96fa-7962e14f060c", 00:06:07.119 "zoned": false 00:06:07.119 } 00:06:07.119 ]' 00:06:07.119 10:06:26 -- rpc/rpc.sh@21 -- # jq length 00:06:07.119 10:06:26 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.119 10:06:26 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.119 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.119 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.119 10:06:26 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.119 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.119 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.119 10:06:26 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.119 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.119 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.119 10:06:26 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.119 10:06:26 -- rpc/rpc.sh@26 -- # jq length 00:06:07.119 10:06:26 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.119 00:06:07.119 real 0m0.306s 00:06:07.119 user 0m0.204s 00:06:07.119 sys 0m0.036s 00:06:07.119 10:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.119 ************************************ 00:06:07.119 END TEST rpc_integrity 00:06:07.119 ************************************ 00:06:07.119 10:06:26 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.119 10:06:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.119 10:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.119 ************************************ 00:06:07.119 START TEST rpc_plugins 00:06:07.119 ************************************ 00:06:07.119 10:06:26 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:07.119 10:06:26 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.119 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.119 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.377 10:06:26 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.377 10:06:26 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.377 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.377 10:06:26 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.377 { 00:06:07.377 "aliases": [ 00:06:07.377 "32eec2f4-a80d-4efb-8f70-2a6d7b3ba6e2" 00:06:07.377 ], 00:06:07.377 "assigned_rate_limits": { 00:06:07.377 "r_mbytes_per_sec": 0, 00:06:07.377 
"rw_ios_per_sec": 0, 00:06:07.377 "rw_mbytes_per_sec": 0, 00:06:07.377 "w_mbytes_per_sec": 0 00:06:07.377 }, 00:06:07.377 "block_size": 4096, 00:06:07.377 "claimed": false, 00:06:07.377 "driver_specific": {}, 00:06:07.377 "memory_domains": [ 00:06:07.377 { 00:06:07.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.377 "dma_device_type": 2 00:06:07.377 } 00:06:07.377 ], 00:06:07.377 "name": "Malloc1", 00:06:07.377 "num_blocks": 256, 00:06:07.377 "product_name": "Malloc disk", 00:06:07.377 "supported_io_types": { 00:06:07.377 "abort": true, 00:06:07.377 "compare": false, 00:06:07.377 "compare_and_write": false, 00:06:07.377 "flush": true, 00:06:07.377 "nvme_admin": false, 00:06:07.377 "nvme_io": false, 00:06:07.377 "read": true, 00:06:07.377 "reset": true, 00:06:07.377 "unmap": true, 00:06:07.377 "write": true, 00:06:07.377 "write_zeroes": true 00:06:07.377 }, 00:06:07.377 "uuid": "32eec2f4-a80d-4efb-8f70-2a6d7b3ba6e2", 00:06:07.377 "zoned": false 00:06:07.377 } 00:06:07.377 ]' 00:06:07.377 10:06:26 -- rpc/rpc.sh@32 -- # jq length 00:06:07.377 10:06:26 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:07.377 10:06:26 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:07.377 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.377 10:06:26 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:07.377 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.377 10:06:26 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:07.377 10:06:26 -- rpc/rpc.sh@36 -- # jq length 00:06:07.377 10:06:26 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:07.377 00:06:07.377 real 0m0.155s 00:06:07.377 user 0m0.099s 00:06:07.377 sys 0m0.021s 00:06:07.377 10:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 ************************************ 00:06:07.377 END TEST rpc_plugins 00:06:07.377 ************************************ 00:06:07.377 10:06:26 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:07.377 10:06:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.377 10:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 ************************************ 00:06:07.377 START TEST rpc_trace_cmd_test 00:06:07.377 ************************************ 00:06:07.377 10:06:26 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:07.377 10:06:26 -- rpc/rpc.sh@40 -- # local info 00:06:07.377 10:06:26 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:07.377 10:06:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.377 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:07.377 10:06:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.377 10:06:26 -- rpc/rpc.sh@42 -- # info='{ 00:06:07.377 "bdev": { 00:06:07.377 "mask": "0x8", 00:06:07.377 "tpoint_mask": "0xffffffffffffffff" 00:06:07.377 }, 00:06:07.377 "bdev_nvme": { 00:06:07.377 "mask": "0x4000", 00:06:07.377 "tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "blobfs": { 00:06:07.377 "mask": "0x80", 00:06:07.377 "tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "dsa": { 00:06:07.377 "mask": "0x200", 00:06:07.377 
"tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "ftl": { 00:06:07.377 "mask": "0x40", 00:06:07.377 "tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "iaa": { 00:06:07.377 "mask": "0x1000", 00:06:07.377 "tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "iscsi_conn": { 00:06:07.377 "mask": "0x2", 00:06:07.377 "tpoint_mask": "0x0" 00:06:07.377 }, 00:06:07.377 "nvme_pcie": { 00:06:07.377 "mask": "0x800", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "nvme_tcp": { 00:06:07.378 "mask": "0x2000", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "nvmf_rdma": { 00:06:07.378 "mask": "0x10", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "nvmf_tcp": { 00:06:07.378 "mask": "0x20", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "scsi": { 00:06:07.378 "mask": "0x4", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "thread": { 00:06:07.378 "mask": "0x400", 00:06:07.378 "tpoint_mask": "0x0" 00:06:07.378 }, 00:06:07.378 "tpoint_group_mask": "0x8", 00:06:07.378 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67346" 00:06:07.378 }' 00:06:07.378 10:06:26 -- rpc/rpc.sh@43 -- # jq length 00:06:07.637 10:06:26 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:07.637 10:06:26 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:07.637 10:06:26 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:07.637 10:06:26 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:07.637 10:06:27 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:07.637 10:06:27 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:07.637 10:06:27 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:07.637 10:06:27 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:07.637 10:06:27 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:07.637 00:06:07.637 real 0m0.263s 00:06:07.637 user 0m0.221s 00:06:07.637 sys 0m0.031s 00:06:07.637 ************************************ 00:06:07.637 END TEST rpc_trace_cmd_test 00:06:07.637 ************************************ 00:06:07.637 10:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.637 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.637 10:06:27 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:06:07.637 10:06:27 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:06:07.637 10:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.637 10:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.637 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.637 ************************************ 00:06:07.637 START TEST go_rpc 00:06:07.637 ************************************ 00:06:07.637 10:06:27 -- common/autotest_common.sh@1114 -- # go_rpc 00:06:07.637 10:06:27 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:07.637 10:06:27 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:06:07.637 10:06:27 -- rpc/rpc.sh@52 -- # jq length 00:06:07.896 10:06:27 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:06:07.896 10:06:27 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:06:07.896 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.896 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.896 10:06:27 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:06:07.896 10:06:27 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:07.896 10:06:27 -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["57d02d36-963a-4219-a030-90b214caf61d"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"57d02d36-963a-4219-a030-90b214caf61d","zoned":false}]' 00:06:07.896 10:06:27 -- rpc/rpc.sh@57 -- # jq length 00:06:07.896 10:06:27 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:06:07.896 10:06:27 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:07.896 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.896 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.896 10:06:27 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:06:07.896 10:06:27 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:06:07.896 10:06:27 -- rpc/rpc.sh@61 -- # jq length 00:06:07.896 10:06:27 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:06:07.896 00:06:07.896 real 0m0.212s 00:06:07.896 user 0m0.148s 00:06:07.896 sys 0m0.030s 00:06:07.896 10:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.896 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 ************************************ 00:06:07.896 END TEST go_rpc 00:06:07.896 ************************************ 00:06:07.896 10:06:27 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:07.896 10:06:27 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:07.896 10:06:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.896 10:06:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.896 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 ************************************ 00:06:07.896 START TEST rpc_daemon_integrity 00:06:07.896 ************************************ 00:06:07.896 10:06:27 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:07.896 10:06:27 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.896 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.896 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:07.896 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.896 10:06:27 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.896 10:06:27 -- rpc/rpc.sh@13 -- # jq length 00:06:08.154 10:06:27 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.154 10:06:27 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.154 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.154 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.154 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.154 10:06:27 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:06:08.154 10:06:27 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.154 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.154 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.154 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.154 10:06:27 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.154 { 00:06:08.154 "aliases": [ 00:06:08.154 "0140b757-5d80-4cda-b3e8-8d8beeb1366f" 00:06:08.154 ], 00:06:08.154 "assigned_rate_limits": { 00:06:08.154 
"r_mbytes_per_sec": 0, 00:06:08.154 "rw_ios_per_sec": 0, 00:06:08.154 "rw_mbytes_per_sec": 0, 00:06:08.154 "w_mbytes_per_sec": 0 00:06:08.154 }, 00:06:08.154 "block_size": 512, 00:06:08.154 "claimed": false, 00:06:08.154 "driver_specific": {}, 00:06:08.154 "memory_domains": [ 00:06:08.154 { 00:06:08.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.154 "dma_device_type": 2 00:06:08.154 } 00:06:08.154 ], 00:06:08.154 "name": "Malloc3", 00:06:08.154 "num_blocks": 16384, 00:06:08.154 "product_name": "Malloc disk", 00:06:08.154 "supported_io_types": { 00:06:08.154 "abort": true, 00:06:08.154 "compare": false, 00:06:08.154 "compare_and_write": false, 00:06:08.154 "flush": true, 00:06:08.154 "nvme_admin": false, 00:06:08.154 "nvme_io": false, 00:06:08.154 "read": true, 00:06:08.154 "reset": true, 00:06:08.154 "unmap": true, 00:06:08.154 "write": true, 00:06:08.154 "write_zeroes": true 00:06:08.154 }, 00:06:08.154 "uuid": "0140b757-5d80-4cda-b3e8-8d8beeb1366f", 00:06:08.154 "zoned": false 00:06:08.154 } 00:06:08.154 ]' 00:06:08.154 10:06:27 -- rpc/rpc.sh@17 -- # jq length 00:06:08.154 10:06:27 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.154 10:06:27 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:06:08.154 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.154 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.154 [2024-11-19 10:06:27.567995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:08.154 [2024-11-19 10:06:27.568046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.154 [2024-11-19 10:06:27.568066] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ea21d0 00:06:08.154 [2024-11-19 10:06:27.568076] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.154 [2024-11-19 10:06:27.569459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.154 [2024-11-19 10:06:27.569505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.154 Passthru0 00:06:08.154 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.154 10:06:27 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.154 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.154 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.154 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.154 10:06:27 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.154 { 00:06:08.154 "aliases": [ 00:06:08.154 "0140b757-5d80-4cda-b3e8-8d8beeb1366f" 00:06:08.154 ], 00:06:08.154 "assigned_rate_limits": { 00:06:08.154 "r_mbytes_per_sec": 0, 00:06:08.154 "rw_ios_per_sec": 0, 00:06:08.154 "rw_mbytes_per_sec": 0, 00:06:08.154 "w_mbytes_per_sec": 0 00:06:08.154 }, 00:06:08.154 "block_size": 512, 00:06:08.154 "claim_type": "exclusive_write", 00:06:08.154 "claimed": true, 00:06:08.154 "driver_specific": {}, 00:06:08.154 "memory_domains": [ 00:06:08.154 { 00:06:08.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.154 "dma_device_type": 2 00:06:08.154 } 00:06:08.155 ], 00:06:08.155 "name": "Malloc3", 00:06:08.155 "num_blocks": 16384, 00:06:08.155 "product_name": "Malloc disk", 00:06:08.155 "supported_io_types": { 00:06:08.155 "abort": true, 00:06:08.155 "compare": false, 00:06:08.155 "compare_and_write": false, 00:06:08.155 "flush": true, 00:06:08.155 "nvme_admin": false, 00:06:08.155 "nvme_io": false, 00:06:08.155 "read": true, 00:06:08.155 "reset": true, 
00:06:08.155 "unmap": true, 00:06:08.155 "write": true, 00:06:08.155 "write_zeroes": true 00:06:08.155 }, 00:06:08.155 "uuid": "0140b757-5d80-4cda-b3e8-8d8beeb1366f", 00:06:08.155 "zoned": false 00:06:08.155 }, 00:06:08.155 { 00:06:08.155 "aliases": [ 00:06:08.155 "a3ec5da1-c930-5d36-b14a-a35a81437b37" 00:06:08.155 ], 00:06:08.155 "assigned_rate_limits": { 00:06:08.155 "r_mbytes_per_sec": 0, 00:06:08.155 "rw_ios_per_sec": 0, 00:06:08.155 "rw_mbytes_per_sec": 0, 00:06:08.155 "w_mbytes_per_sec": 0 00:06:08.155 }, 00:06:08.155 "block_size": 512, 00:06:08.155 "claimed": false, 00:06:08.155 "driver_specific": { 00:06:08.155 "passthru": { 00:06:08.155 "base_bdev_name": "Malloc3", 00:06:08.155 "name": "Passthru0" 00:06:08.155 } 00:06:08.155 }, 00:06:08.155 "memory_domains": [ 00:06:08.155 { 00:06:08.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.155 "dma_device_type": 2 00:06:08.155 } 00:06:08.155 ], 00:06:08.155 "name": "Passthru0", 00:06:08.155 "num_blocks": 16384, 00:06:08.155 "product_name": "passthru", 00:06:08.155 "supported_io_types": { 00:06:08.155 "abort": true, 00:06:08.155 "compare": false, 00:06:08.155 "compare_and_write": false, 00:06:08.155 "flush": true, 00:06:08.155 "nvme_admin": false, 00:06:08.155 "nvme_io": false, 00:06:08.155 "read": true, 00:06:08.155 "reset": true, 00:06:08.155 "unmap": true, 00:06:08.155 "write": true, 00:06:08.155 "write_zeroes": true 00:06:08.155 }, 00:06:08.155 "uuid": "a3ec5da1-c930-5d36-b14a-a35a81437b37", 00:06:08.155 "zoned": false 00:06:08.155 } 00:06:08.155 ]' 00:06:08.155 10:06:27 -- rpc/rpc.sh@21 -- # jq length 00:06:08.155 10:06:27 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.155 10:06:27 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.155 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.155 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.155 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.155 10:06:27 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:06:08.155 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.155 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.155 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.155 10:06:27 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.155 10:06:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.155 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.155 10:06:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.155 10:06:27 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.155 10:06:27 -- rpc/rpc.sh@26 -- # jq length 00:06:08.413 10:06:27 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:08.413 00:06:08.413 real 0m0.317s 00:06:08.413 user 0m0.218s 00:06:08.413 sys 0m0.032s 00:06:08.413 10:06:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.413 10:06:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.413 ************************************ 00:06:08.413 END TEST rpc_daemon_integrity 00:06:08.413 ************************************ 00:06:08.413 10:06:27 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:08.413 10:06:27 -- rpc/rpc.sh@84 -- # killprocess 67346 00:06:08.413 10:06:27 -- common/autotest_common.sh@936 -- # '[' -z 67346 ']' 00:06:08.413 10:06:27 -- common/autotest_common.sh@940 -- # kill -0 67346 00:06:08.413 10:06:27 -- common/autotest_common.sh@941 -- # uname 00:06:08.413 10:06:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.413 10:06:27 -- common/autotest_common.sh@942 -- 
# ps --no-headers -o comm= 67346 00:06:08.413 10:06:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.413 10:06:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.413 10:06:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67346' 00:06:08.413 killing process with pid 67346 00:06:08.413 10:06:27 -- common/autotest_common.sh@955 -- # kill 67346 00:06:08.413 10:06:27 -- common/autotest_common.sh@960 -- # wait 67346 00:06:08.671 00:06:08.671 real 0m3.066s 00:06:08.671 user 0m4.211s 00:06:08.671 sys 0m0.647s 00:06:08.671 10:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.671 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 ************************************ 00:06:08.671 END TEST rpc 00:06:08.671 ************************************ 00:06:08.671 10:06:28 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.671 10:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.671 10:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.671 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:08.671 ************************************ 00:06:08.671 START TEST rpc_client 00:06:08.671 ************************************ 00:06:08.671 10:06:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:08.671 * Looking for test storage... 00:06:08.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:08.671 10:06:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.671 10:06:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.671 10:06:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.929 10:06:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.929 10:06:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.929 10:06:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.929 10:06:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.929 10:06:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.929 10:06:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.929 10:06:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.929 10:06:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.929 10:06:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.929 10:06:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.929 10:06:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.929 10:06:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.929 10:06:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.929 10:06:28 -- scripts/common.sh@344 -- # : 1 00:06:08.929 10:06:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.929 10:06:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.929 10:06:28 -- scripts/common.sh@364 -- # decimal 1 00:06:08.929 10:06:28 -- scripts/common.sh@352 -- # local d=1 00:06:08.929 10:06:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.929 10:06:28 -- scripts/common.sh@354 -- # echo 1 00:06:08.929 10:06:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.929 10:06:28 -- scripts/common.sh@365 -- # decimal 2 00:06:08.929 10:06:28 -- scripts/common.sh@352 -- # local d=2 00:06:08.929 10:06:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.929 10:06:28 -- scripts/common.sh@354 -- # echo 2 00:06:08.929 10:06:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.929 10:06:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.929 10:06:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.930 10:06:28 -- scripts/common.sh@367 -- # return 0 00:06:08.930 10:06:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.930 --rc genhtml_branch_coverage=1 00:06:08.930 --rc genhtml_function_coverage=1 00:06:08.930 --rc genhtml_legend=1 00:06:08.930 --rc geninfo_all_blocks=1 00:06:08.930 --rc geninfo_unexecuted_blocks=1 00:06:08.930 00:06:08.930 ' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.930 --rc genhtml_branch_coverage=1 00:06:08.930 --rc genhtml_function_coverage=1 00:06:08.930 --rc genhtml_legend=1 00:06:08.930 --rc geninfo_all_blocks=1 00:06:08.930 --rc geninfo_unexecuted_blocks=1 00:06:08.930 00:06:08.930 ' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.930 --rc genhtml_branch_coverage=1 00:06:08.930 --rc genhtml_function_coverage=1 00:06:08.930 --rc genhtml_legend=1 00:06:08.930 --rc geninfo_all_blocks=1 00:06:08.930 --rc geninfo_unexecuted_blocks=1 00:06:08.930 00:06:08.930 ' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.930 --rc genhtml_branch_coverage=1 00:06:08.930 --rc genhtml_function_coverage=1 00:06:08.930 --rc genhtml_legend=1 00:06:08.930 --rc geninfo_all_blocks=1 00:06:08.930 --rc geninfo_unexecuted_blocks=1 00:06:08.930 00:06:08.930 ' 00:06:08.930 10:06:28 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:08.930 OK 00:06:08.930 10:06:28 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:08.930 00:06:08.930 real 0m0.188s 00:06:08.930 user 0m0.110s 00:06:08.930 sys 0m0.091s 00:06:08.930 10:06:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.930 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 ************************************ 00:06:08.930 END TEST rpc_client 00:06:08.930 ************************************ 00:06:08.930 10:06:28 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.930 10:06:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.930 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:08.930 ************************************ 00:06:08.930 START TEST 
json_config 00:06:08.930 ************************************ 00:06:08.930 10:06:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:08.930 10:06:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.930 10:06:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.930 10:06:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.930 10:06:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.930 10:06:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.930 10:06:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.930 10:06:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.930 10:06:28 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.930 10:06:28 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.930 10:06:28 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.930 10:06:28 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.930 10:06:28 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.930 10:06:28 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.930 10:06:28 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.930 10:06:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.930 10:06:28 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.930 10:06:28 -- scripts/common.sh@344 -- # : 1 00:06:08.930 10:06:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.930 10:06:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.930 10:06:28 -- scripts/common.sh@364 -- # decimal 1 00:06:08.930 10:06:28 -- scripts/common.sh@352 -- # local d=1 00:06:08.930 10:06:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.930 10:06:28 -- scripts/common.sh@354 -- # echo 1 00:06:08.930 10:06:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.930 10:06:28 -- scripts/common.sh@365 -- # decimal 2 00:06:09.189 10:06:28 -- scripts/common.sh@352 -- # local d=2 00:06:09.189 10:06:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.189 10:06:28 -- scripts/common.sh@354 -- # echo 2 00:06:09.189 10:06:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.189 10:06:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.189 10:06:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.189 10:06:28 -- scripts/common.sh@367 -- # return 0 00:06:09.189 10:06:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.189 10:06:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 10:06:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 10:06:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc 
geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 10:06:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.189 --rc genhtml_branch_coverage=1 00:06:09.189 --rc genhtml_function_coverage=1 00:06:09.189 --rc genhtml_legend=1 00:06:09.189 --rc geninfo_all_blocks=1 00:06:09.189 --rc geninfo_unexecuted_blocks=1 00:06:09.189 00:06:09.189 ' 00:06:09.189 10:06:28 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.189 10:06:28 -- nvmf/common.sh@7 -- # uname -s 00:06:09.189 10:06:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.189 10:06:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.189 10:06:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.189 10:06:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.189 10:06:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.189 10:06:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.189 10:06:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.189 10:06:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.189 10:06:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.189 10:06:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.189 10:06:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:06:09.189 10:06:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:06:09.189 10:06:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.189 10:06:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.189 10:06:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.189 10:06:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.189 10:06:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.189 10:06:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.189 10:06:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.189 10:06:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.189 10:06:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.189 10:06:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.189 
10:06:28 -- paths/export.sh@5 -- # export PATH 00:06:09.189 10:06:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.189 10:06:28 -- nvmf/common.sh@46 -- # : 0 00:06:09.189 10:06:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:09.189 10:06:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:09.190 10:06:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:09.190 10:06:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.190 10:06:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.190 10:06:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:09.190 10:06:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:09.190 10:06:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:09.190 10:06:28 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:09.190 10:06:28 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:09.190 10:06:28 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:09.190 INFO: JSON configuration test init 00:06:09.190 10:06:28 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:09.190 10:06:28 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:09.190 10:06:28 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:09.190 10:06:28 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:09.190 10:06:28 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:09.190 10:06:28 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:09.190 10:06:28 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:09.190 10:06:28 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:09.190 10:06:28 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:09.190 10:06:28 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:09.190 10:06:28 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:09.190 10:06:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.190 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 10:06:28 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:09.190 10:06:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:09.190 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 10:06:28 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:09.190 10:06:28 -- json_config/json_config.sh@98 -- # local app=target 00:06:09.190 
10:06:28 -- json_config/json_config.sh@99 -- # shift 00:06:09.190 10:06:28 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:09.190 10:06:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:09.190 10:06:28 -- json_config/json_config.sh@111 -- # app_pid[$app]=67662 00:06:09.190 10:06:28 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:09.190 10:06:28 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:09.190 Waiting for target to run... 00:06:09.190 10:06:28 -- json_config/json_config.sh@114 -- # waitforlisten 67662 /var/tmp/spdk_tgt.sock 00:06:09.190 10:06:28 -- common/autotest_common.sh@829 -- # '[' -z 67662 ']' 00:06:09.190 10:06:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.190 10:06:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.190 10:06:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.190 10:06:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.190 10:06:28 -- common/autotest_common.sh@10 -- # set +x 00:06:09.190 [2024-11-19 10:06:28.573463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.190 [2024-11-19 10:06:28.573941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67662 ] 00:06:09.448 [2024-11-19 10:06:28.862305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.448 [2024-11-19 10:06:28.888599] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.448 [2024-11-19 10:06:28.888794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.383 00:06:10.383 10:06:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.383 10:06:29 -- common/autotest_common.sh@862 -- # return 0 00:06:10.383 10:06:29 -- json_config/json_config.sh@115 -- # echo '' 00:06:10.383 10:06:29 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:10.383 10:06:29 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:10.383 10:06:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.383 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:10.383 10:06:29 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:10.383 10:06:29 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:10.383 10:06:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.383 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:10.383 10:06:29 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:10.383 10:06:29 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:10.383 10:06:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
load_config 00:06:10.642 10:06:30 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:10.642 10:06:30 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:10.642 10:06:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.642 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.642 10:06:30 -- json_config/json_config.sh@48 -- # local ret=0 00:06:10.642 10:06:30 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:10.642 10:06:30 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:10.642 10:06:30 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:10.642 10:06:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:10.642 10:06:30 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:10.901 10:06:30 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:10.901 10:06:30 -- json_config/json_config.sh@51 -- # local get_types 00:06:10.901 10:06:30 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:10.901 10:06:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.901 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.901 10:06:30 -- json_config/json_config.sh@58 -- # return 0 00:06:10.901 10:06:30 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:10.901 10:06:30 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:10.901 10:06:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.901 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:06:10.901 10:06:30 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:10.901 10:06:30 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:10.901 10:06:30 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:10.901 10:06:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:11.467 MallocForNvmf0 00:06:11.467 10:06:30 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:11.467 10:06:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:11.467 MallocForNvmf1 00:06:11.726 10:06:31 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:11.726 10:06:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:11.985 [2024-11-19 10:06:31.277193] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.985 10:06:31 -- json_config/json_config.sh@299 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:11.985 10:06:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:12.244 10:06:31 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.244 10:06:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:12.502 10:06:31 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.502 10:06:31 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:12.760 10:06:32 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:12.760 10:06:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:13.019 [2024-11-19 10:06:32.497948] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:13.019 10:06:32 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:13.019 10:06:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.019 10:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.019 10:06:32 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:13.019 10:06:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.019 10:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.277 10:06:32 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:13.277 10:06:32 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.277 10:06:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:13.535 MallocBdevForConfigChangeCheck 00:06:13.535 10:06:32 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:13.535 10:06:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.535 10:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:13.535 10:06:32 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:13.535 10:06:32 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.794 INFO: shutting down applications... 00:06:13.794 10:06:33 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
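[editor's note] For readability, the create_nvmf_subsystem_config phase traced above boils down to a short sequence of RPCs against the target's Unix socket. The following is a condensed sketch reconstructed from the trace, not a verbatim replay: the socket path, bdev names/sizes and NQN are the ones visible in the log, tgt_rpc is treated as a plain wrapper around scripts/rpc.py, and error handling is omitted.

    # Sketch: how json_config_setup_target builds the NVMe-oF/TCP target in this run.
    sock=/var/tmp/spdk_tgt.sock
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" "$@"; }

    # Backing malloc bdevs (size/block-size arguments as captured in the trace).
    rpc bdev_malloc_create 8 512  --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport, subsystem, namespaces, and a listener on 127.0.0.1:4420.
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # Extra bdev used later to prove that configuration changes are detected.
    rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck

    # Snapshot the running configuration; the test keeps it as spdk_tgt_config.json
    # for the relaunch/compare steps below.
    rpc save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json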
00:06:13.794 10:06:33 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:13.794 10:06:33 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:13.794 10:06:33 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:13.794 10:06:33 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:14.362 Calling clear_iscsi_subsystem 00:06:14.362 Calling clear_nvmf_subsystem 00:06:14.362 Calling clear_nbd_subsystem 00:06:14.362 Calling clear_ublk_subsystem 00:06:14.362 Calling clear_vhost_blk_subsystem 00:06:14.362 Calling clear_vhost_scsi_subsystem 00:06:14.362 Calling clear_scheduler_subsystem 00:06:14.362 Calling clear_bdev_subsystem 00:06:14.362 Calling clear_accel_subsystem 00:06:14.362 Calling clear_vmd_subsystem 00:06:14.362 Calling clear_sock_subsystem 00:06:14.362 Calling clear_iobuf_subsystem 00:06:14.362 10:06:33 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:14.362 10:06:33 -- json_config/json_config.sh@396 -- # count=100 00:06:14.362 10:06:33 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:14.362 10:06:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.362 10:06:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:14.362 10:06:33 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:14.621 10:06:34 -- json_config/json_config.sh@398 -- # break 00:06:14.621 10:06:34 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:14.621 10:06:34 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:14.621 10:06:34 -- json_config/json_config.sh@120 -- # local app=target 00:06:14.621 10:06:34 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:14.621 10:06:34 -- json_config/json_config.sh@124 -- # [[ -n 67662 ]] 00:06:14.621 10:06:34 -- json_config/json_config.sh@127 -- # kill -SIGINT 67662 00:06:14.621 10:06:34 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:14.621 10:06:34 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:14.621 10:06:34 -- json_config/json_config.sh@130 -- # kill -0 67662 00:06:14.621 10:06:34 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:15.188 10:06:34 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:15.188 10:06:34 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:15.188 10:06:34 -- json_config/json_config.sh@130 -- # kill -0 67662 00:06:15.188 10:06:34 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:15.188 10:06:34 -- json_config/json_config.sh@132 -- # break 00:06:15.188 10:06:34 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:15.188 SPDK target shutdown done 00:06:15.188 10:06:34 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:15.188 INFO: relaunching applications... 00:06:15.188 10:06:34 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
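[editor's note] The shutdown traced above is cooperative: after clear_config.py empties the target, the test sends SIGINT and polls the PID for up to 30 half-second intervals before declaring the target gone. A rough sketch of that loop, using the PID from this run (67662); the real helper also re-checks the config with config_filter.py -method check_empty between iterations:

    # Sketch: json_config_clear + json_config_test_shutdown_app as traced above.
    app_pid=67662

    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config

    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # process exited -> shutdown done
        sleep 0.5
    done
    echo 'SPDK target shutdown done'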
00:06:15.188 10:06:34 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.188 10:06:34 -- json_config/json_config.sh@98 -- # local app=target 00:06:15.188 10:06:34 -- json_config/json_config.sh@99 -- # shift 00:06:15.188 10:06:34 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:15.188 10:06:34 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:15.188 10:06:34 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:15.188 10:06:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:15.188 10:06:34 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:15.188 10:06:34 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:15.188 10:06:34 -- json_config/json_config.sh@111 -- # app_pid[$app]=67943 00:06:15.188 Waiting for target to run... 00:06:15.188 10:06:34 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:15.188 10:06:34 -- json_config/json_config.sh@114 -- # waitforlisten 67943 /var/tmp/spdk_tgt.sock 00:06:15.188 10:06:34 -- common/autotest_common.sh@829 -- # '[' -z 67943 ']' 00:06:15.188 10:06:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.188 10:06:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.188 10:06:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.188 10:06:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.188 10:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:15.188 [2024-11-19 10:06:34.664997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:15.188 [2024-11-19 10:06:34.665099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67943 ] 00:06:15.446 [2024-11-19 10:06:34.964533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.446 [2024-11-19 10:06:34.988712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.446 [2024-11-19 10:06:34.988935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.014 [2024-11-19 10:06:35.278238] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.014 [2024-11-19 10:06:35.310338] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:16.273 10:06:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.273 10:06:35 -- common/autotest_common.sh@862 -- # return 0 00:06:16.273 00:06:16.273 10:06:35 -- json_config/json_config.sh@115 -- # echo '' 00:06:16.273 10:06:35 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:16.273 10:06:35 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:16.273 INFO: Checking if target configuration is the same... 
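[editor's note] The two spdk_tgt command lines captured in this log differ only in how configuration arrives: the first boot (pid 67662) uses --wait-for-rpc so the test can configure everything over the RPC socket, while the relaunch (pid 67943) replays the saved JSON and comes back up with the same NVMe/TCP listener on 127.0.0.1:4420. In both cases the test then waits for the process to listen on the Unix domain socket before proceeding. Side by side, with exactly the flags from the log:

    # First launch: empty target, configuration applied afterwards via rpc.py.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # Relaunch: target rebuilds the same state directly from the saved config.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &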
00:06:16.273 10:06:35 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.273 10:06:35 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:16.273 10:06:35 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:16.273 + '[' 2 -ne 2 ']' 00:06:16.273 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:16.273 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:16.273 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:16.273 +++ basename /dev/fd/62 00:06:16.273 ++ mktemp /tmp/62.XXX 00:06:16.273 + tmp_file_1=/tmp/62.Lns 00:06:16.273 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:16.273 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:16.273 + tmp_file_2=/tmp/spdk_tgt_config.json.9m6 00:06:16.273 + ret=0 00:06:16.273 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:16.840 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:16.840 + diff -u /tmp/62.Lns /tmp/spdk_tgt_config.json.9m6 00:06:16.840 INFO: JSON config files are the same 00:06:16.840 + echo 'INFO: JSON config files are the same' 00:06:16.840 + rm /tmp/62.Lns /tmp/spdk_tgt_config.json.9m6 00:06:16.840 + exit 0 00:06:16.840 10:06:36 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:16.840 INFO: changing configuration and checking if this can be detected... 00:06:16.840 10:06:36 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:16.840 10:06:36 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:16.840 10:06:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:17.098 10:06:36 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:17.098 10:06:36 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.098 10:06:36 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:17.099 + '[' 2 -ne 2 ']' 00:06:17.099 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:17.099 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
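[editor's note] The comparison just traced, and the change-detection pass that starts here after MallocBdevForConfigChangeCheck is deleted, both reduce to: dump the live config, key-sort both JSON documents, and diff them. A minimal sketch of that idea using the same helpers as json_diff.sh, assuming config_filter.py reads the JSON on stdin (file names here are illustrative, not the mktemp names from this run):

    # Sketch: decide whether the live config still matches the saved one.
    rootdir=/home/vagrant/spdk_repo/spdk
    sort_cfg() { "$rootdir"/test/json_config/config_filter.py -method sort; }

    "$rootdir"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
    sort_cfg < "$rootdir"/spdk_tgt_config.json > /tmp/saved.json

    if diff -u /tmp/live.json /tmp/saved.json; then
        echo 'INFO: JSON config files are the same'
    else
        # Expected outcome of the second pass, after bdev_malloc_delete MallocBdevForConfigChangeCheck.
        echo 'INFO: configuration change detected.'
    fi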
00:06:17.099 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:17.099 +++ basename /dev/fd/62 00:06:17.099 ++ mktemp /tmp/62.XXX 00:06:17.099 + tmp_file_1=/tmp/62.l08 00:06:17.099 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.099 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:17.099 + tmp_file_2=/tmp/spdk_tgt_config.json.zZC 00:06:17.099 + ret=0 00:06:17.099 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.666 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:17.666 + diff -u /tmp/62.l08 /tmp/spdk_tgt_config.json.zZC 00:06:17.666 + ret=1 00:06:17.666 + echo '=== Start of file: /tmp/62.l08 ===' 00:06:17.666 + cat /tmp/62.l08 00:06:17.666 + echo '=== End of file: /tmp/62.l08 ===' 00:06:17.666 + echo '' 00:06:17.666 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zZC ===' 00:06:17.666 + cat /tmp/spdk_tgt_config.json.zZC 00:06:17.666 + echo '=== End of file: /tmp/spdk_tgt_config.json.zZC ===' 00:06:17.666 + echo '' 00:06:17.666 + rm /tmp/62.l08 /tmp/spdk_tgt_config.json.zZC 00:06:17.666 + exit 1 00:06:17.666 INFO: configuration change detected. 00:06:17.666 10:06:37 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:17.666 10:06:37 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:17.666 10:06:37 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:17.666 10:06:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.666 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.666 10:06:37 -- json_config/json_config.sh@360 -- # local ret=0 00:06:17.666 10:06:37 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:17.666 10:06:37 -- json_config/json_config.sh@370 -- # [[ -n 67943 ]] 00:06:17.666 10:06:37 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:17.666 10:06:37 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:17.666 10:06:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:17.666 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.666 10:06:37 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:17.666 10:06:37 -- json_config/json_config.sh@246 -- # uname -s 00:06:17.666 10:06:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:17.666 10:06:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:17.666 10:06:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:17.666 10:06:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:17.666 10:06:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.666 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.666 10:06:37 -- json_config/json_config.sh@376 -- # killprocess 67943 00:06:17.666 10:06:37 -- common/autotest_common.sh@936 -- # '[' -z 67943 ']' 00:06:17.666 10:06:37 -- common/autotest_common.sh@940 -- # kill -0 67943 00:06:17.666 10:06:37 -- common/autotest_common.sh@941 -- # uname 00:06:17.666 10:06:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:17.666 10:06:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67943 00:06:17.666 10:06:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:17.666 10:06:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:17.666 killing process with pid 67943 00:06:17.666 10:06:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67943' 00:06:17.666 
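The second comparison above is the negative test: the harness deletes the MallocBdevForConfigChangeCheck bdev over RPC, re-dumps the live configuration, and this time expects the diff against spdk_tgt_config.json to fail (ret=1), which is what 'INFO: configuration change detected.' records. A short sketch, reusing the variables from the previous sketch:

    # Sketch: mutate the live config, then expect the same diff to fail.
    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > "$live_cfg"
    if ! diff -u "$live_cfg" "$file_cfg" > /dev/null; then
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live_cfg" "$file_cfg"

Once the change is detected, the target (pid 67943) is torn down with kill/wait, as the next lines of the trace show.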
10:06:37 -- common/autotest_common.sh@955 -- # kill 67943 00:06:17.666 10:06:37 -- common/autotest_common.sh@960 -- # wait 67943 00:06:17.925 10:06:37 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:17.925 10:06:37 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:17.925 10:06:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:17.925 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.925 10:06:37 -- json_config/json_config.sh@381 -- # return 0 00:06:17.925 INFO: Success 00:06:17.925 10:06:37 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:17.925 00:06:17.925 real 0m8.948s 00:06:17.925 user 0m13.391s 00:06:17.925 sys 0m1.538s 00:06:17.925 10:06:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.925 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.925 ************************************ 00:06:17.925 END TEST json_config 00:06:17.925 ************************************ 00:06:17.925 10:06:37 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.925 10:06:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.925 10:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.925 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.925 ************************************ 00:06:17.925 START TEST json_config_extra_key 00:06:17.925 ************************************ 00:06:17.925 10:06:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.925 10:06:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.925 10:06:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.925 10:06:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.925 10:06:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.925 10:06:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.925 10:06:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.925 10:06:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.925 10:06:37 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.925 10:06:37 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.925 10:06:37 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.925 10:06:37 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.925 10:06:37 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.925 10:06:37 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.925 10:06:37 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.925 10:06:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.925 10:06:37 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.925 10:06:37 -- scripts/common.sh@344 -- # : 1 00:06:17.925 10:06:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.925 10:06:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.185 10:06:37 -- scripts/common.sh@364 -- # decimal 1 00:06:18.185 10:06:37 -- scripts/common.sh@352 -- # local d=1 00:06:18.185 10:06:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.185 10:06:37 -- scripts/common.sh@354 -- # echo 1 00:06:18.185 10:06:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:18.185 10:06:37 -- scripts/common.sh@365 -- # decimal 2 00:06:18.185 10:06:37 -- scripts/common.sh@352 -- # local d=2 00:06:18.185 10:06:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.185 10:06:37 -- scripts/common.sh@354 -- # echo 2 00:06:18.185 10:06:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:18.185 10:06:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:18.185 10:06:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:18.185 10:06:37 -- scripts/common.sh@367 -- # return 0 00:06:18.185 10:06:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.185 10:06:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.185 --rc genhtml_branch_coverage=1 00:06:18.185 --rc genhtml_function_coverage=1 00:06:18.185 --rc genhtml_legend=1 00:06:18.185 --rc geninfo_all_blocks=1 00:06:18.185 --rc geninfo_unexecuted_blocks=1 00:06:18.185 00:06:18.185 ' 00:06:18.185 10:06:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.185 --rc genhtml_branch_coverage=1 00:06:18.185 --rc genhtml_function_coverage=1 00:06:18.185 --rc genhtml_legend=1 00:06:18.185 --rc geninfo_all_blocks=1 00:06:18.185 --rc geninfo_unexecuted_blocks=1 00:06:18.185 00:06:18.185 ' 00:06:18.185 10:06:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.185 --rc genhtml_branch_coverage=1 00:06:18.185 --rc genhtml_function_coverage=1 00:06:18.185 --rc genhtml_legend=1 00:06:18.185 --rc geninfo_all_blocks=1 00:06:18.185 --rc geninfo_unexecuted_blocks=1 00:06:18.185 00:06:18.185 ' 00:06:18.185 10:06:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:18.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.185 --rc genhtml_branch_coverage=1 00:06:18.185 --rc genhtml_function_coverage=1 00:06:18.185 --rc genhtml_legend=1 00:06:18.185 --rc geninfo_all_blocks=1 00:06:18.185 --rc geninfo_unexecuted_blocks=1 00:06:18.185 00:06:18.185 ' 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.185 10:06:37 -- nvmf/common.sh@7 -- # uname -s 00:06:18.185 10:06:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.185 10:06:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.185 10:06:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.185 10:06:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.185 10:06:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.185 10:06:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.185 10:06:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.185 10:06:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.185 10:06:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.185 10:06:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.185 10:06:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:06:18.185 10:06:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:06:18.185 10:06:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.185 10:06:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.185 10:06:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.185 10:06:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.185 10:06:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.185 10:06:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.185 10:06:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.185 10:06:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.185 10:06:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.185 10:06:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.185 10:06:37 -- paths/export.sh@5 -- # export PATH 00:06:18.185 10:06:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.185 10:06:37 -- nvmf/common.sh@46 -- # : 0 00:06:18.185 10:06:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:18.185 10:06:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:18.185 10:06:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:18.185 10:06:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.185 10:06:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.185 10:06:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:18.185 10:06:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:18.185 10:06:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.185 INFO: launching applications... 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68126 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.185 Waiting for target to run... 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:18.185 10:06:37 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68126 /var/tmp/spdk_tgt.sock 00:06:18.185 10:06:37 -- common/autotest_common.sh@829 -- # '[' -z 68126 ']' 00:06:18.185 10:06:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.185 10:06:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.186 10:06:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:18.186 10:06:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.186 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:18.186 [2024-11-19 10:06:37.556247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
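json_config_extra_key.sh tracks each application it manages in the associative arrays traced above (app_pid, app_socket, app_params, configs_path), starts the target with an extra --json file, and later shuts it down with SIGINT followed by a kill -0 polling loop, as the trace further on shows. A compact sketch of that bookkeeping, with values copied from this run; the start_app/stop_app helper names are illustrative, not the script's real function names.

    # Per-app bookkeeping as declared in the trace; helpers are illustrative.
    rootdir=/home/vagrant/spdk_repo/spdk
    declare -A app_pid
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]=$rootdir/test/json_config/extra_key.json)

    start_app() {
        local app=$1
        # Word-splitting of app_params is intentional: it holds several flags.
        "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }

    stop_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        # Mirror of the traced loop: up to 30 half-second checks for exit.
        for ((i = 0; i < 30; i++)); do
            kill -0 "${app_pid[$app]}" 2>/dev/null || { app_pid[$app]=; return 0; }
            sleep 0.5
        done
        return 1
    }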
00:06:18.186 [2024-11-19 10:06:37.556334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68126 ] 00:06:18.444 [2024-11-19 10:06:37.837484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.444 [2024-11-19 10:06:37.863811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.444 [2024-11-19 10:06:37.864009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.377 10:06:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.377 10:06:38 -- common/autotest_common.sh@862 -- # return 0 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:19.377 00:06:19.377 INFO: shutting down applications... 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68126 ]] 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68126 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68126 00:06:19.377 10:06:38 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68126 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:19.634 SPDK target shutdown done 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:19.634 Success 00:06:19.634 10:06:39 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:19.634 00:06:19.634 real 0m1.823s 00:06:19.634 user 0m1.798s 00:06:19.634 sys 0m0.300s 00:06:19.634 10:06:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.634 10:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:19.634 ************************************ 00:06:19.634 END TEST json_config_extra_key 00:06:19.634 ************************************ 00:06:19.893 10:06:39 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.893 10:06:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.893 10:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:19.893 ************************************ 00:06:19.893 START TEST alias_rpc 00:06:19.893 ************************************ 00:06:19.893 10:06:39 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:19.893 * Looking for test storage... 00:06:19.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:19.893 10:06:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:19.893 10:06:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:19.893 10:06:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:19.893 10:06:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:19.893 10:06:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:19.893 10:06:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:19.893 10:06:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:19.893 10:06:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:19.893 10:06:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.893 10:06:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:19.893 10:06:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:19.893 10:06:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:19.893 10:06:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:19.893 10:06:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:19.893 10:06:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:19.893 10:06:39 -- scripts/common.sh@344 -- # : 1 00:06:19.893 10:06:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:19.893 10:06:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.893 10:06:39 -- scripts/common.sh@364 -- # decimal 1 00:06:19.893 10:06:39 -- scripts/common.sh@352 -- # local d=1 00:06:19.893 10:06:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.893 10:06:39 -- scripts/common.sh@354 -- # echo 1 00:06:19.893 10:06:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:19.893 10:06:39 -- scripts/common.sh@365 -- # decimal 2 00:06:19.893 10:06:39 -- scripts/common.sh@352 -- # local d=2 00:06:19.893 10:06:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.893 10:06:39 -- scripts/common.sh@354 -- # echo 2 00:06:19.893 10:06:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:19.893 10:06:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:19.893 10:06:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:19.893 10:06:39 -- scripts/common.sh@367 -- # return 0 00:06:19.893 10:06:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:19.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.893 --rc genhtml_branch_coverage=1 00:06:19.893 --rc genhtml_function_coverage=1 00:06:19.893 --rc genhtml_legend=1 00:06:19.893 --rc geninfo_all_blocks=1 00:06:19.893 --rc geninfo_unexecuted_blocks=1 00:06:19.893 00:06:19.893 ' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:19.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.893 --rc genhtml_branch_coverage=1 00:06:19.893 --rc genhtml_function_coverage=1 00:06:19.893 --rc genhtml_legend=1 00:06:19.893 --rc geninfo_all_blocks=1 00:06:19.893 --rc geninfo_unexecuted_blocks=1 00:06:19.893 00:06:19.893 ' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:19.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.893 --rc genhtml_branch_coverage=1 00:06:19.893 --rc genhtml_function_coverage=1 00:06:19.893 --rc genhtml_legend=1 
00:06:19.893 --rc geninfo_all_blocks=1 00:06:19.893 --rc geninfo_unexecuted_blocks=1 00:06:19.893 00:06:19.893 ' 00:06:19.893 10:06:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:19.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.893 --rc genhtml_branch_coverage=1 00:06:19.893 --rc genhtml_function_coverage=1 00:06:19.893 --rc genhtml_legend=1 00:06:19.893 --rc geninfo_all_blocks=1 00:06:19.893 --rc geninfo_unexecuted_blocks=1 00:06:19.893 00:06:19.893 ' 00:06:19.893 10:06:39 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:19.893 10:06:39 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68215 00:06:19.893 10:06:39 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:19.893 10:06:39 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68215 00:06:19.893 10:06:39 -- common/autotest_common.sh@829 -- # '[' -z 68215 ']' 00:06:19.893 10:06:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.893 10:06:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.893 10:06:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.893 10:06:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.893 10:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:19.893 [2024-11-19 10:06:39.434015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:19.893 [2024-11-19 10:06:39.434125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68215 ] 00:06:20.150 [2024-11-19 10:06:39.574014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.150 [2024-11-19 10:06:39.622113] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:20.150 [2024-11-19 10:06:39.622315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.085 10:06:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.085 10:06:40 -- common/autotest_common.sh@862 -- # return 0 00:06:21.085 10:06:40 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:21.343 10:06:40 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68215 00:06:21.343 10:06:40 -- common/autotest_common.sh@936 -- # '[' -z 68215 ']' 00:06:21.343 10:06:40 -- common/autotest_common.sh@940 -- # kill -0 68215 00:06:21.343 10:06:40 -- common/autotest_common.sh@941 -- # uname 00:06:21.343 10:06:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.343 10:06:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68215 00:06:21.685 10:06:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.685 10:06:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.685 killing process with pid 68215 00:06:21.685 10:06:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68215' 00:06:21.685 10:06:40 -- common/autotest_common.sh@955 -- # kill 68215 00:06:21.685 10:06:40 -- common/autotest_common.sh@960 -- # wait 68215 00:06:21.685 00:06:21.685 real 0m1.934s 00:06:21.685 user 0m2.418s 00:06:21.685 sys 0m0.383s 00:06:21.685 10:06:41 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.685 10:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.685 ************************************ 00:06:21.685 END TEST alias_rpc 00:06:21.685 ************************************ 00:06:21.685 10:06:41 -- spdk/autotest.sh@169 -- # [[ 1 -eq 0 ]] 00:06:21.685 10:06:41 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.685 10:06:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.685 10:06:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.685 10:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.685 ************************************ 00:06:21.685 START TEST dpdk_mem_utility 00:06:21.685 ************************************ 00:06:21.685 10:06:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.955 * Looking for test storage... 00:06:21.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:21.955 10:06:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:21.955 10:06:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:21.955 10:06:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.955 10:06:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.955 10:06:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.955 10:06:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.955 10:06:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.955 10:06:41 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.955 10:06:41 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.955 10:06:41 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.955 10:06:41 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.955 10:06:41 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.955 10:06:41 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.955 10:06:41 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.955 10:06:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.955 10:06:41 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.955 10:06:41 -- scripts/common.sh@344 -- # : 1 00:06:21.955 10:06:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.955 10:06:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.955 10:06:41 -- scripts/common.sh@364 -- # decimal 1 00:06:21.955 10:06:41 -- scripts/common.sh@352 -- # local d=1 00:06:21.955 10:06:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.955 10:06:41 -- scripts/common.sh@354 -- # echo 1 00:06:21.955 10:06:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.955 10:06:41 -- scripts/common.sh@365 -- # decimal 2 00:06:21.955 10:06:41 -- scripts/common.sh@352 -- # local d=2 00:06:21.955 10:06:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.955 10:06:41 -- scripts/common.sh@354 -- # echo 2 00:06:21.955 10:06:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.955 10:06:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.955 10:06:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.955 10:06:41 -- scripts/common.sh@367 -- # return 0 00:06:21.955 10:06:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.955 10:06:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.955 --rc genhtml_branch_coverage=1 00:06:21.955 --rc genhtml_function_coverage=1 00:06:21.955 --rc genhtml_legend=1 00:06:21.955 --rc geninfo_all_blocks=1 00:06:21.955 --rc geninfo_unexecuted_blocks=1 00:06:21.955 00:06:21.955 ' 00:06:21.955 10:06:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.955 --rc genhtml_branch_coverage=1 00:06:21.955 --rc genhtml_function_coverage=1 00:06:21.955 --rc genhtml_legend=1 00:06:21.955 --rc geninfo_all_blocks=1 00:06:21.955 --rc geninfo_unexecuted_blocks=1 00:06:21.955 00:06:21.955 ' 00:06:21.955 10:06:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.955 --rc genhtml_branch_coverage=1 00:06:21.955 --rc genhtml_function_coverage=1 00:06:21.955 --rc genhtml_legend=1 00:06:21.955 --rc geninfo_all_blocks=1 00:06:21.955 --rc geninfo_unexecuted_blocks=1 00:06:21.955 00:06:21.955 ' 00:06:21.955 10:06:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.955 --rc genhtml_branch_coverage=1 00:06:21.955 --rc genhtml_function_coverage=1 00:06:21.955 --rc genhtml_legend=1 00:06:21.955 --rc geninfo_all_blocks=1 00:06:21.955 --rc geninfo_unexecuted_blocks=1 00:06:21.955 00:06:21.955 ' 00:06:21.955 10:06:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:21.955 10:06:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68313 00:06:21.955 10:06:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68313 00:06:21.955 10:06:41 -- common/autotest_common.sh@829 -- # '[' -z 68313 ']' 00:06:21.955 10:06:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.955 10:06:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.955 10:06:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.955 10:06:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
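The dpdk_mem_utility test that follows starts a plain spdk_tgt (pid 68313 here, default RPC socket /var/tmp/spdk.sock), asks it to dump its DPDK memory state with the env_dpdk_get_mem_stats RPC, and post-processes the dump with scripts/dpdk_mem_info.py, once for the overall heap/mempool/memzone summary and once with -m 0 for the per-element detail of heap 0 reproduced below. A minimal sketch of that sequence, with paths as in the trace; the comments describing the two dpdk_mem_info.py invocations are inferred from the output that follows.

    # Ask the running target to write its DPDK memory dump, then summarize it.
    rootdir=/home/vagrant/spdk_repo/spdk
    "$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats   # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$rootdir/scripts/dpdk_mem_info.py"                # heap / mempool / memzone summary
    "$rootdir/scripts/dpdk_mem_info.py" -m 0           # detailed free/malloc/memzone elements for heap 0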
00:06:21.955 10:06:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.955 10:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.955 [2024-11-19 10:06:41.403475] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.955 [2024-11-19 10:06:41.403575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68313 ] 00:06:22.214 [2024-11-19 10:06:41.536054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.214 [2024-11-19 10:06:41.572165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.214 [2024-11-19 10:06:41.572329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.152 10:06:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.152 10:06:42 -- common/autotest_common.sh@862 -- # return 0 00:06:23.152 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:23.152 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:23.152 10:06:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.152 10:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.152 { 00:06:23.152 "filename": "/tmp/spdk_mem_dump.txt" 00:06:23.152 } 00:06:23.152 10:06:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.152 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:23.152 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:23.152 1 heaps totaling size 814.000000 MiB 00:06:23.152 size: 814.000000 MiB heap id: 0 00:06:23.152 end heaps---------- 00:06:23.152 8 mempools totaling size 598.116089 MiB 00:06:23.152 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:23.152 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:23.152 size: 84.521057 MiB name: bdev_io_68313 00:06:23.152 size: 51.011292 MiB name: evtpool_68313 00:06:23.152 size: 50.003479 MiB name: msgpool_68313 00:06:23.152 size: 21.763794 MiB name: PDU_Pool 00:06:23.152 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:23.152 size: 0.026123 MiB name: Session_Pool 00:06:23.152 end mempools------- 00:06:23.152 6 memzones totaling size 4.142822 MiB 00:06:23.152 size: 1.000366 MiB name: RG_ring_0_68313 00:06:23.152 size: 1.000366 MiB name: RG_ring_1_68313 00:06:23.152 size: 1.000366 MiB name: RG_ring_4_68313 00:06:23.152 size: 1.000366 MiB name: RG_ring_5_68313 00:06:23.152 size: 0.125366 MiB name: RG_ring_2_68313 00:06:23.152 size: 0.015991 MiB name: RG_ring_3_68313 00:06:23.152 end memzones------- 00:06:23.152 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:23.152 heap id: 0 total size: 814.000000 MiB number of busy elements: 224 number of free elements: 15 00:06:23.152 list of free elements. 
size: 12.485840 MiB 00:06:23.152 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:23.152 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:23.152 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:23.152 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:23.152 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:23.152 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:23.152 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:23.152 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:23.152 element at address: 0x200000200000 with size: 0.837219 MiB 00:06:23.152 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:06:23.152 element at address: 0x20000b200000 with size: 0.489258 MiB 00:06:23.152 element at address: 0x200000800000 with size: 0.486877 MiB 00:06:23.152 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:23.152 element at address: 0x200027e00000 with size: 0.398132 MiB 00:06:23.152 element at address: 0x200003a00000 with size: 0.351685 MiB 00:06:23.152 list of standard malloc elements. size: 199.251587 MiB 00:06:23.152 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:23.152 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:23.152 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:23.152 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:23.152 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:23.152 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:23.152 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:23.152 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:23.152 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:23.152 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:06:23.152 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:23.152 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:23.152 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:23.153 element at 
address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa943c0 
with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:23.153 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:23.153 element at address: 0x200027e6dec0 with size: 0.000183 MiB 
00:06:23.153 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:23.154 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:23.154 list of memzone associated elements. 
size: 602.262573 MiB 00:06:23.154 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:23.154 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:23.154 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:23.154 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:23.154 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:23.154 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68313_0 00:06:23.154 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:23.154 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68313_0 00:06:23.154 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:23.154 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68313_0 00:06:23.154 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:23.154 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:23.154 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:23.154 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:23.154 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:23.154 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68313 00:06:23.154 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:23.154 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68313 00:06:23.154 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:23.154 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68313 00:06:23.154 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:23.154 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:23.154 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:23.154 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:23.154 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:23.154 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:23.154 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:23.154 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:23.154 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:23.154 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68313 00:06:23.154 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:23.154 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68313 00:06:23.154 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:23.154 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68313 00:06:23.154 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:23.154 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68313 00:06:23.154 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:23.154 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68313 00:06:23.154 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:23.154 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:23.154 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:23.154 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:23.154 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:23.154 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:23.154 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:23.154 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68313 00:06:23.154 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:23.154 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:23.154 element at address: 0x200027e66040 with size: 0.023743 MiB 00:06:23.154 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:23.154 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:23.154 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68313 00:06:23.154 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:06:23.154 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:23.154 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:06:23.154 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68313 00:06:23.154 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:23.154 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68313 00:06:23.154 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:06:23.154 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:23.154 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:23.154 10:06:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68313 00:06:23.154 10:06:42 -- common/autotest_common.sh@936 -- # '[' -z 68313 ']' 00:06:23.154 10:06:42 -- common/autotest_common.sh@940 -- # kill -0 68313 00:06:23.154 10:06:42 -- common/autotest_common.sh@941 -- # uname 00:06:23.154 10:06:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.154 10:06:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68313 00:06:23.154 10:06:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.154 killing process with pid 68313 00:06:23.154 10:06:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.154 10:06:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68313' 00:06:23.154 10:06:42 -- common/autotest_common.sh@955 -- # kill 68313 00:06:23.154 10:06:42 -- common/autotest_common.sh@960 -- # wait 68313 00:06:23.414 00:06:23.414 real 0m1.698s 00:06:23.414 user 0m2.013s 00:06:23.414 sys 0m0.353s 00:06:23.414 10:06:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.414 10:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.414 ************************************ 00:06:23.414 END TEST dpdk_mem_utility 00:06:23.414 ************************************ 00:06:23.414 10:06:42 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.414 10:06:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.414 10:06:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.414 10:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:23.414 ************************************ 00:06:23.414 START TEST event 00:06:23.414 ************************************ 00:06:23.414 10:06:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.673 * Looking for test storage... 
00:06:23.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:23.673 10:06:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:23.673 10:06:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:23.673 10:06:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:23.673 10:06:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:23.673 10:06:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:23.673 10:06:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:23.673 10:06:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:23.673 10:06:43 -- scripts/common.sh@335 -- # IFS=.-: 00:06:23.673 10:06:43 -- scripts/common.sh@335 -- # read -ra ver1 00:06:23.673 10:06:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.673 10:06:43 -- scripts/common.sh@336 -- # read -ra ver2 00:06:23.673 10:06:43 -- scripts/common.sh@337 -- # local 'op=<' 00:06:23.673 10:06:43 -- scripts/common.sh@339 -- # ver1_l=2 00:06:23.673 10:06:43 -- scripts/common.sh@340 -- # ver2_l=1 00:06:23.673 10:06:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:23.673 10:06:43 -- scripts/common.sh@343 -- # case "$op" in 00:06:23.673 10:06:43 -- scripts/common.sh@344 -- # : 1 00:06:23.673 10:06:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:23.673 10:06:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.673 10:06:43 -- scripts/common.sh@364 -- # decimal 1 00:06:23.673 10:06:43 -- scripts/common.sh@352 -- # local d=1 00:06:23.673 10:06:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.673 10:06:43 -- scripts/common.sh@354 -- # echo 1 00:06:23.673 10:06:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:23.673 10:06:43 -- scripts/common.sh@365 -- # decimal 2 00:06:23.673 10:06:43 -- scripts/common.sh@352 -- # local d=2 00:06:23.673 10:06:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.673 10:06:43 -- scripts/common.sh@354 -- # echo 2 00:06:23.673 10:06:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:23.673 10:06:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:23.673 10:06:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:23.673 10:06:43 -- scripts/common.sh@367 -- # return 0 00:06:23.673 10:06:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.673 10:06:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.673 --rc genhtml_branch_coverage=1 00:06:23.673 --rc genhtml_function_coverage=1 00:06:23.673 --rc genhtml_legend=1 00:06:23.673 --rc geninfo_all_blocks=1 00:06:23.673 --rc geninfo_unexecuted_blocks=1 00:06:23.673 00:06:23.673 ' 00:06:23.673 10:06:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.673 --rc genhtml_branch_coverage=1 00:06:23.673 --rc genhtml_function_coverage=1 00:06:23.673 --rc genhtml_legend=1 00:06:23.673 --rc geninfo_all_blocks=1 00:06:23.673 --rc geninfo_unexecuted_blocks=1 00:06:23.673 00:06:23.673 ' 00:06:23.673 10:06:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.673 --rc genhtml_branch_coverage=1 00:06:23.673 --rc genhtml_function_coverage=1 00:06:23.673 --rc genhtml_legend=1 00:06:23.673 --rc geninfo_all_blocks=1 00:06:23.673 --rc geninfo_unexecuted_blocks=1 00:06:23.673 00:06:23.673 ' 00:06:23.673 10:06:43 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:23.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.673 --rc genhtml_branch_coverage=1 00:06:23.673 --rc genhtml_function_coverage=1 00:06:23.673 --rc genhtml_legend=1 00:06:23.673 --rc geninfo_all_blocks=1 00:06:23.673 --rc geninfo_unexecuted_blocks=1 00:06:23.673 00:06:23.673 ' 00:06:23.673 10:06:43 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:23.673 10:06:43 -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.673 10:06:43 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.673 10:06:43 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:23.673 10:06:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.673 10:06:43 -- common/autotest_common.sh@10 -- # set +x 00:06:23.673 ************************************ 00:06:23.673 START TEST event_perf 00:06:23.673 ************************************ 00:06:23.673 10:06:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.673 Running I/O for 1 seconds...[2024-11-19 10:06:43.131131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:23.674 [2024-11-19 10:06:43.131980] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68405 ] 00:06:23.932 [2024-11-19 10:06:43.269228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.932 [2024-11-19 10:06:43.306400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.932 [2024-11-19 10:06:43.306482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.932 [2024-11-19 10:06:43.306620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.932 [2024-11-19 10:06:43.306624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.864 Running I/O for 1 seconds... 00:06:24.864 lcore 0: 187447 00:06:24.864 lcore 1: 187446 00:06:24.864 lcore 2: 187446 00:06:24.864 lcore 3: 187446 00:06:24.864 done. 00:06:24.864 00:06:24.864 real 0m1.246s 00:06:24.864 user 0m4.083s 00:06:24.865 sys 0m0.042s 00:06:24.865 10:06:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.865 10:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.865 ************************************ 00:06:24.865 END TEST event_perf 00:06:24.865 ************************************ 00:06:24.865 10:06:44 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:24.865 10:06:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:24.865 10:06:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.865 10:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:25.123 ************************************ 00:06:25.123 START TEST event_reactor 00:06:25.123 ************************************ 00:06:25.123 10:06:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:25.123 [2024-11-19 10:06:44.435564] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:25.123 [2024-11-19 10:06:44.435653] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68444 ] 00:06:25.123 [2024-11-19 10:06:44.571705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.123 [2024-11-19 10:06:44.609748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.498 test_start 00:06:26.498 oneshot 00:06:26.498 tick 100 00:06:26.498 tick 100 00:06:26.498 tick 250 00:06:26.498 tick 100 00:06:26.498 tick 100 00:06:26.498 tick 100 00:06:26.498 tick 250 00:06:26.498 tick 500 00:06:26.498 tick 100 00:06:26.498 tick 100 00:06:26.498 tick 250 00:06:26.498 tick 100 00:06:26.498 tick 100 00:06:26.498 test_end 00:06:26.498 00:06:26.498 real 0m1.252s 00:06:26.498 user 0m1.106s 00:06:26.498 sys 0m0.039s 00:06:26.498 10:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.498 10:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.498 ************************************ 00:06:26.498 END TEST event_reactor 00:06:26.498 ************************************ 00:06:26.498 10:06:45 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.498 10:06:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:26.498 10:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.498 10:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 ************************************ 00:06:26.499 START TEST event_reactor_perf 00:06:26.499 ************************************ 00:06:26.499 10:06:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.499 [2024-11-19 10:06:45.736317] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:26.499 [2024-11-19 10:06:45.736481] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68479 ] 00:06:26.499 [2024-11-19 10:06:45.879814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.499 [2024-11-19 10:06:45.915646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.434 test_start 00:06:27.434 test_end 00:06:27.435 Performance: 349496 events per second 00:06:27.435 00:06:27.435 real 0m1.250s 00:06:27.435 user 0m1.094s 00:06:27.435 sys 0m0.051s 00:06:27.435 10:06:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.435 ************************************ 00:06:27.435 END TEST event_reactor_perf 00:06:27.435 ************************************ 00:06:27.435 10:06:46 -- common/autotest_common.sh@10 -- # set +x 00:06:27.693 10:06:47 -- event/event.sh@49 -- # uname -s 00:06:27.693 10:06:47 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.693 10:06:47 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.693 10:06:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.693 10:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.693 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:27.693 ************************************ 00:06:27.693 START TEST event_scheduler 00:06:27.693 ************************************ 00:06:27.693 10:06:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.693 * Looking for test storage... 00:06:27.693 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:27.693 10:06:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:27.693 10:06:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:27.693 10:06:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:27.693 10:06:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:27.693 10:06:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:27.693 10:06:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:27.693 10:06:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:27.693 10:06:47 -- scripts/common.sh@335 -- # IFS=.-: 00:06:27.693 10:06:47 -- scripts/common.sh@335 -- # read -ra ver1 00:06:27.693 10:06:47 -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.693 10:06:47 -- scripts/common.sh@336 -- # read -ra ver2 00:06:27.693 10:06:47 -- scripts/common.sh@337 -- # local 'op=<' 00:06:27.693 10:06:47 -- scripts/common.sh@339 -- # ver1_l=2 00:06:27.693 10:06:47 -- scripts/common.sh@340 -- # ver2_l=1 00:06:27.693 10:06:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:27.693 10:06:47 -- scripts/common.sh@343 -- # case "$op" in 00:06:27.693 10:06:47 -- scripts/common.sh@344 -- # : 1 00:06:27.693 10:06:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:27.693 10:06:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.693 10:06:47 -- scripts/common.sh@364 -- # decimal 1 00:06:27.693 10:06:47 -- scripts/common.sh@352 -- # local d=1 00:06:27.693 10:06:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.693 10:06:47 -- scripts/common.sh@354 -- # echo 1 00:06:27.693 10:06:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:27.693 10:06:47 -- scripts/common.sh@365 -- # decimal 2 00:06:27.693 10:06:47 -- scripts/common.sh@352 -- # local d=2 00:06:27.693 10:06:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.693 10:06:47 -- scripts/common.sh@354 -- # echo 2 00:06:27.693 10:06:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:27.693 10:06:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:27.693 10:06:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:27.693 10:06:47 -- scripts/common.sh@367 -- # return 0 00:06:27.693 10:06:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.693 10:06:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:27.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.693 --rc genhtml_branch_coverage=1 00:06:27.693 --rc genhtml_function_coverage=1 00:06:27.693 --rc genhtml_legend=1 00:06:27.693 --rc geninfo_all_blocks=1 00:06:27.693 --rc geninfo_unexecuted_blocks=1 00:06:27.693 00:06:27.693 ' 00:06:27.693 10:06:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:27.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.693 --rc genhtml_branch_coverage=1 00:06:27.693 --rc genhtml_function_coverage=1 00:06:27.693 --rc genhtml_legend=1 00:06:27.693 --rc geninfo_all_blocks=1 00:06:27.693 --rc geninfo_unexecuted_blocks=1 00:06:27.693 00:06:27.693 ' 00:06:27.693 10:06:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:27.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.694 --rc genhtml_branch_coverage=1 00:06:27.694 --rc genhtml_function_coverage=1 00:06:27.694 --rc genhtml_legend=1 00:06:27.694 --rc geninfo_all_blocks=1 00:06:27.694 --rc geninfo_unexecuted_blocks=1 00:06:27.694 00:06:27.694 ' 00:06:27.694 10:06:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:27.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.694 --rc genhtml_branch_coverage=1 00:06:27.694 --rc genhtml_function_coverage=1 00:06:27.694 --rc genhtml_legend=1 00:06:27.694 --rc geninfo_all_blocks=1 00:06:27.694 --rc geninfo_unexecuted_blocks=1 00:06:27.694 00:06:27.694 ' 00:06:27.694 10:06:47 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.694 10:06:47 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68542 00:06:27.694 10:06:47 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.694 10:06:47 -- scheduler/scheduler.sh@37 -- # waitforlisten 68542 00:06:27.694 10:06:47 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.694 10:06:47 -- common/autotest_common.sh@829 -- # '[' -z 68542 ']' 00:06:27.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.694 10:06:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.694 10:06:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.694 10:06:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:27.694 10:06:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.694 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:27.952 [2024-11-19 10:06:47.256543] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:27.952 [2024-11-19 10:06:47.256641] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68542 ] 00:06:27.952 [2024-11-19 10:06:47.397096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.952 [2024-11-19 10:06:47.440736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.952 [2024-11-19 10:06:47.440867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.952 [2024-11-19 10:06:47.440965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.952 [2024-11-19 10:06:47.440969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.211 10:06:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.211 10:06:47 -- common/autotest_common.sh@862 -- # return 0 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 POWER: Env isn't set yet! 00:06:28.211 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:28.211 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.211 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.211 POWER: Attempting to initialise PSTAT power management... 00:06:28.211 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.211 POWER: Cannot set governor of lcore 0 to performance 00:06:28.211 POWER: Attempting to initialise CPPC power management... 00:06:28.211 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.211 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.211 POWER: Attempting to initialise VM power management... 
00:06:28.211 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:28.211 POWER: Unable to set Power Management Environment for lcore 0 00:06:28.211 [2024-11-19 10:06:47.516650] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:28.211 [2024-11-19 10:06:47.516793] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:28.211 [2024-11-19 10:06:47.516918] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:28.211 [2024-11-19 10:06:47.517041] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.211 [2024-11-19 10:06:47.517091] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.211 [2024-11-19 10:06:47.517153] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 [2024-11-19 10:06:47.573038] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.211 10:06:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.211 10:06:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 ************************************ 00:06:28.211 START TEST scheduler_create_thread 00:06:28.211 ************************************ 00:06:28.211 10:06:47 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 2 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 3 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 4 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 5 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 6 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 7 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 8 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 9 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.211 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.211 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.211 10 00:06:28.211 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.211 10:06:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.212 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.212 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.212 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.212 10:06:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.212 10:06:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.212 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.212 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:28.212 10:06:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.212 10:06:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.212 10:06:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.212 10:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:30.143 10:06:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.143 10:06:49 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:30.143 10:06:49 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:30.143 10:06:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.143 10:06:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.710 ************************************ 00:06:30.710 END TEST scheduler_create_thread 00:06:30.710 ************************************ 00:06:30.710 10:06:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.710 00:06:30.710 real 0m2.617s 00:06:30.710 user 0m0.017s 00:06:30.710 sys 0m0.009s 00:06:30.710 10:06:50 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.710 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.710 10:06:50 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.710 10:06:50 -- scheduler/scheduler.sh@46 -- # killprocess 68542 00:06:30.710 10:06:50 -- common/autotest_common.sh@936 -- # '[' -z 68542 ']' 00:06:30.710 10:06:50 -- common/autotest_common.sh@940 -- # kill -0 68542 00:06:30.710 10:06:50 -- common/autotest_common.sh@941 -- # uname 00:06:30.710 10:06:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.710 10:06:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68542 00:06:30.968 killing process with pid 68542 00:06:30.968 10:06:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:30.968 10:06:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:30.968 10:06:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68542' 00:06:30.969 10:06:50 -- common/autotest_common.sh@955 -- # kill 68542 00:06:30.969 10:06:50 -- common/autotest_common.sh@960 -- # wait 68542 00:06:31.227 [2024-11-19 10:06:50.680925] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:31.486 ************************************ 00:06:31.486 END TEST event_scheduler 00:06:31.486 ************************************ 00:06:31.486 00:06:31.486 real 0m3.808s 00:06:31.486 user 0m5.694s 00:06:31.486 sys 0m0.296s 00:06:31.486 10:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:31.486 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:31.486 10:06:50 -- event/event.sh@51 -- # modprobe -n nbd 00:06:31.486 10:06:50 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:31.486 10:06:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.486 10:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.486 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:31.486 ************************************ 00:06:31.486 START TEST app_repeat 00:06:31.486 ************************************ 00:06:31.486 10:06:50 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:31.486 10:06:50 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.486 10:06:50 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.486 10:06:50 -- event/event.sh@13 -- # local nbd_list 00:06:31.486 10:06:50 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.486 10:06:50 -- event/event.sh@14 -- # local bdev_list 00:06:31.486 10:06:50 -- event/event.sh@15 -- # local repeat_times=4 00:06:31.486 10:06:50 -- event/event.sh@17 -- # modprobe nbd 00:06:31.486 10:06:50 -- event/event.sh@19 -- # repeat_pid=68646 00:06:31.486 10:06:50 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:31.486 10:06:50 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:31.486 Process app_repeat pid: 68646 00:06:31.486 spdk_app_start Round 0 00:06:31.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.486 10:06:50 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68646' 00:06:31.486 10:06:50 -- event/event.sh@23 -- # for i in {0..2} 00:06:31.486 10:06:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:31.486 10:06:50 -- event/event.sh@25 -- # waitforlisten 68646 /var/tmp/spdk-nbd.sock 00:06:31.486 10:06:50 -- common/autotest_common.sh@829 -- # '[' -z 68646 ']' 00:06:31.486 10:06:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.486 10:06:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.486 10:06:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.486 10:06:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.486 10:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:31.486 [2024-11-19 10:06:50.916062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:31.487 [2024-11-19 10:06:50.916336] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68646 ] 00:06:31.746 [2024-11-19 10:06:51.051854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.746 [2024-11-19 10:06:51.086806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.746 [2024-11-19 10:06:51.086814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.746 10:06:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.746 10:06:51 -- common/autotest_common.sh@862 -- # return 0 00:06:31.746 10:06:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.005 Malloc0 00:06:32.005 10:06:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.265 Malloc1 00:06:32.265 10:06:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@12 -- # local i 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.265 10:06:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.832 /dev/nbd0 00:06:32.832 10:06:52 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd0 00:06:32.832 10:06:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.832 10:06:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:32.832 10:06:52 -- common/autotest_common.sh@867 -- # local i 00:06:32.832 10:06:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:32.832 10:06:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:32.832 10:06:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:32.832 10:06:52 -- common/autotest_common.sh@871 -- # break 00:06:32.832 10:06:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:32.832 10:06:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:32.832 10:06:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.832 1+0 records in 00:06:32.832 1+0 records out 00:06:32.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236036 s, 17.4 MB/s 00:06:32.832 10:06:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.832 10:06:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:32.832 10:06:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.832 10:06:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:32.832 10:06:52 -- common/autotest_common.sh@887 -- # return 0 00:06:32.832 10:06:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.832 10:06:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.832 10:06:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.091 /dev/nbd1 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.091 10:06:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:33.091 10:06:52 -- common/autotest_common.sh@867 -- # local i 00:06:33.091 10:06:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:33.091 10:06:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:33.091 10:06:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:33.091 10:06:52 -- common/autotest_common.sh@871 -- # break 00:06:33.091 10:06:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:33.091 10:06:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:33.091 10:06:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.091 1+0 records in 00:06:33.091 1+0 records out 00:06:33.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276877 s, 14.8 MB/s 00:06:33.091 10:06:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.091 10:06:52 -- common/autotest_common.sh@884 -- # size=4096 00:06:33.091 10:06:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.091 10:06:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:33.091 10:06:52 -- common/autotest_common.sh@887 -- # return 0 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.091 10:06:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.091 10:06:52 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.350 { 00:06:33.350 "bdev_name": "Malloc0", 00:06:33.350 "nbd_device": "/dev/nbd0" 00:06:33.350 }, 00:06:33.350 { 00:06:33.350 "bdev_name": "Malloc1", 00:06:33.350 "nbd_device": "/dev/nbd1" 00:06:33.350 } 00:06:33.350 ]' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.350 { 00:06:33.350 "bdev_name": "Malloc0", 00:06:33.350 "nbd_device": "/dev/nbd0" 00:06:33.350 }, 00:06:33.350 { 00:06:33.350 "bdev_name": "Malloc1", 00:06:33.350 "nbd_device": "/dev/nbd1" 00:06:33.350 } 00:06:33.350 ]' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.350 /dev/nbd1' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.350 /dev/nbd1' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.350 256+0 records in 00:06:33.350 256+0 records out 00:06:33.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100345 s, 104 MB/s 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.350 256+0 records in 00:06:33.350 256+0 records out 00:06:33.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251643 s, 41.7 MB/s 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.350 256+0 records in 00:06:33.350 256+0 records out 00:06:33.350 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247992 s, 42.3 MB/s 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@51 -- # local i 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.350 10:06:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@41 -- # break 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.608 10:06:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@41 -- # break 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.176 10:06:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@65 -- # true 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.434 10:06:53 -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.434 10:06:53 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.693 10:06:54 -- event/event.sh@35 -- # sleep 3 00:06:34.693 [2024-11-19 10:06:54.209216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.951 [2024-11-19 10:06:54.244592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.951 [2024-11-19 10:06:54.244606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.951 [2024-11-19 10:06:54.275261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.951 [2024-11-19 10:06:54.275334] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.253 10:06:57 -- event/event.sh@23 -- # for i in {0..2} 00:06:38.253 spdk_app_start Round 1 00:06:38.253 10:06:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:38.253 10:06:57 -- event/event.sh@25 -- # waitforlisten 68646 /var/tmp/spdk-nbd.sock 00:06:38.253 10:06:57 -- common/autotest_common.sh@829 -- # '[' -z 68646 ']' 00:06:38.253 10:06:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.253 10:06:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.253 10:06:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.253 10:06:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.253 10:06:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.253 10:06:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.253 10:06:57 -- common/autotest_common.sh@862 -- # return 0 00:06:38.253 10:06:57 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.253 Malloc0 00:06:38.253 10:06:57 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.512 Malloc1 00:06:38.512 10:06:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@12 -- # local i 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.512 10:06:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.769 /dev/nbd0 00:06:38.769 
10:06:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.769 10:06:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.769 10:06:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:38.769 10:06:58 -- common/autotest_common.sh@867 -- # local i 00:06:38.769 10:06:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.769 10:06:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.770 10:06:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:38.770 10:06:58 -- common/autotest_common.sh@871 -- # break 00:06:38.770 10:06:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.770 10:06:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.770 10:06:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.770 1+0 records in 00:06:38.770 1+0 records out 00:06:38.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214893 s, 19.1 MB/s 00:06:38.770 10:06:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.770 10:06:58 -- common/autotest_common.sh@884 -- # size=4096 00:06:38.770 10:06:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.770 10:06:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.770 10:06:58 -- common/autotest_common.sh@887 -- # return 0 00:06:38.770 10:06:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.770 10:06:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.770 10:06:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.028 /dev/nbd1 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.286 10:06:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:39.286 10:06:58 -- common/autotest_common.sh@867 -- # local i 00:06:39.286 10:06:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:39.286 10:06:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:39.286 10:06:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:39.286 10:06:58 -- common/autotest_common.sh@871 -- # break 00:06:39.286 10:06:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:39.286 10:06:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:39.286 10:06:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.286 1+0 records in 00:06:39.286 1+0 records out 00:06:39.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275417 s, 14.9 MB/s 00:06:39.286 10:06:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.286 10:06:58 -- common/autotest_common.sh@884 -- # size=4096 00:06:39.286 10:06:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.286 10:06:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:39.286 10:06:58 -- common/autotest_common.sh@887 -- # return 0 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.286 10:06:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:06:39.286 10:06:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.545 { 00:06:39.545 "bdev_name": "Malloc0", 00:06:39.545 "nbd_device": "/dev/nbd0" 00:06:39.545 }, 00:06:39.545 { 00:06:39.545 "bdev_name": "Malloc1", 00:06:39.545 "nbd_device": "/dev/nbd1" 00:06:39.545 } 00:06:39.545 ]' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.545 { 00:06:39.545 "bdev_name": "Malloc0", 00:06:39.545 "nbd_device": "/dev/nbd0" 00:06:39.545 }, 00:06:39.545 { 00:06:39.545 "bdev_name": "Malloc1", 00:06:39.545 "nbd_device": "/dev/nbd1" 00:06:39.545 } 00:06:39.545 ]' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.545 /dev/nbd1' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.545 /dev/nbd1' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.545 256+0 records in 00:06:39.545 256+0 records out 00:06:39.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105163 s, 99.7 MB/s 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.545 10:06:58 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.545 256+0 records in 00:06:39.545 256+0 records out 00:06:39.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246315 s, 42.6 MB/s 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.545 256+0 records in 00:06:39.545 256+0 records out 00:06:39.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276785 s, 37.9 MB/s 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.545 10:06:59 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@51 -- # local i 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.545 10:06:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@41 -- # break 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.112 10:06:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@41 -- # break 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.371 10:06:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@65 -- # true 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.629 10:07:00 -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.629 10:07:00 -- event/event.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.886 10:07:00 -- event/event.sh@35 -- # sleep 3 00:06:41.144 [2024-11-19 10:07:00.464758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.144 [2024-11-19 10:07:00.499998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.144 [2024-11-19 10:07:00.500010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.144 [2024-11-19 10:07:00.530100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.144 [2024-11-19 10:07:00.530160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:44.428 10:07:03 -- event/event.sh@23 -- # for i in {0..2} 00:06:44.428 spdk_app_start Round 2 00:06:44.428 10:07:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:44.428 10:07:03 -- event/event.sh@25 -- # waitforlisten 68646 /var/tmp/spdk-nbd.sock 00:06:44.428 10:07:03 -- common/autotest_common.sh@829 -- # '[' -z 68646 ']' 00:06:44.428 10:07:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.428 10:07:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.428 10:07:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.428 10:07:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.428 10:07:03 -- common/autotest_common.sh@10 -- # set +x 00:06:44.428 10:07:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.428 10:07:03 -- common/autotest_common.sh@862 -- # return 0 00:06:44.428 10:07:03 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.428 Malloc0 00:06:44.428 10:07:03 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.686 Malloc1 00:06:44.686 10:07:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.686 10:07:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc0 /dev/nbd0 00:06:44.944 /dev/nbd0 00:06:44.944 10:07:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.203 10:07:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.203 10:07:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:45.203 10:07:04 -- common/autotest_common.sh@867 -- # local i 00:06:45.203 10:07:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.203 10:07:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.203 10:07:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:45.203 10:07:04 -- common/autotest_common.sh@871 -- # break 00:06:45.203 10:07:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.203 10:07:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.203 10:07:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.203 1+0 records in 00:06:45.203 1+0 records out 00:06:45.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354805 s, 11.5 MB/s 00:06:45.203 10:07:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.203 10:07:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:45.203 10:07:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.203 10:07:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.203 10:07:04 -- common/autotest_common.sh@887 -- # return 0 00:06:45.203 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.203 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.203 10:07:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.461 /dev/nbd1 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.461 10:07:04 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:45.461 10:07:04 -- common/autotest_common.sh@867 -- # local i 00:06:45.461 10:07:04 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:45.461 10:07:04 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:45.461 10:07:04 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:45.461 10:07:04 -- common/autotest_common.sh@871 -- # break 00:06:45.461 10:07:04 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:45.461 10:07:04 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:45.461 10:07:04 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.461 1+0 records in 00:06:45.461 1+0 records out 00:06:45.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293567 s, 14.0 MB/s 00:06:45.461 10:07:04 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.461 10:07:04 -- common/autotest_common.sh@884 -- # size=4096 00:06:45.461 10:07:04 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.461 10:07:04 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:45.461 10:07:04 -- common/autotest_common.sh@887 -- # return 0 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.461 10:07:04 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.461 10:07:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.721 { 00:06:45.721 "bdev_name": "Malloc0", 00:06:45.721 "nbd_device": "/dev/nbd0" 00:06:45.721 }, 00:06:45.721 { 00:06:45.721 "bdev_name": "Malloc1", 00:06:45.721 "nbd_device": "/dev/nbd1" 00:06:45.721 } 00:06:45.721 ]' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.721 { 00:06:45.721 "bdev_name": "Malloc0", 00:06:45.721 "nbd_device": "/dev/nbd0" 00:06:45.721 }, 00:06:45.721 { 00:06:45.721 "bdev_name": "Malloc1", 00:06:45.721 "nbd_device": "/dev/nbd1" 00:06:45.721 } 00:06:45.721 ]' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.721 /dev/nbd1' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.721 /dev/nbd1' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.721 256+0 records in 00:06:45.721 256+0 records out 00:06:45.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465673 s, 225 MB/s 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.721 256+0 records in 00:06:45.721 256+0 records out 00:06:45.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254424 s, 41.2 MB/s 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.721 256+0 records in 00:06:45.721 256+0 records out 00:06:45.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261868 s, 40.0 MB/s 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.721 10:07:05 -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.721 10:07:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.981 10:07:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.240 10:07:05 -- bdev/nbd_common.sh@41 -- # break 00:06:46.240 10:07:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.240 10:07:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.240 10:07:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@41 -- # break 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.498 10:07:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.499 10:07:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.499 10:07:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@65 -- # true 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.757 10:07:06 -- bdev/nbd_common.sh@109 -- # return 0 
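For readers following the xtrace, the write/verify pass that just completed reduces to roughly the loop below. This is a reconstruction from the trace, not the verbatim nbd_common.sh source; tmp_file and nbd_list stand in for the scratch file (test/event/nbdrandtest above) and the two mapped devices.

    # write phase: fill a 1 MiB scratch file with random data, then copy it onto every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify phase: the first 1 MiB of each device must match the scratch file byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

Any mismatch makes cmp exit non-zero, which fails the test before the devices are detached with nbd_stop_disk.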
00:06:46.757 10:07:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.017 10:07:06 -- event/event.sh@35 -- # sleep 3 00:06:47.276 [2024-11-19 10:07:06.661196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.276 [2024-11-19 10:07:06.696203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.276 [2024-11-19 10:07:06.696213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.276 [2024-11-19 10:07:06.725251] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.276 [2024-11-19 10:07:06.725323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.563 10:07:09 -- event/event.sh@38 -- # waitforlisten 68646 /var/tmp/spdk-nbd.sock 00:06:50.563 10:07:09 -- common/autotest_common.sh@829 -- # '[' -z 68646 ']' 00:06:50.563 10:07:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.563 10:07:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.563 10:07:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.563 10:07:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.563 10:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:50.563 10:07:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.563 10:07:09 -- common/autotest_common.sh@862 -- # return 0 00:06:50.563 10:07:09 -- event/event.sh@39 -- # killprocess 68646 00:06:50.563 10:07:09 -- common/autotest_common.sh@936 -- # '[' -z 68646 ']' 00:06:50.563 10:07:09 -- common/autotest_common.sh@940 -- # kill -0 68646 00:06:50.563 10:07:09 -- common/autotest_common.sh@941 -- # uname 00:06:50.563 10:07:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.563 10:07:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68646 00:06:50.563 10:07:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.563 10:07:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.563 killing process with pid 68646 00:06:50.563 10:07:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68646' 00:06:50.563 10:07:09 -- common/autotest_common.sh@955 -- # kill 68646 00:06:50.563 10:07:09 -- common/autotest_common.sh@960 -- # wait 68646 00:06:50.563 spdk_app_start is called in Round 0. 00:06:50.563 Shutdown signal received, stop current app iteration 00:06:50.563 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.563 spdk_app_start is called in Round 1. 00:06:50.563 Shutdown signal received, stop current app iteration 00:06:50.563 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.563 spdk_app_start is called in Round 2. 00:06:50.563 Shutdown signal received, stop current app iteration 00:06:50.563 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.563 spdk_app_start is called in Round 3. 
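The Round 0 through Round 3 lines above are the app echoing each restart cycle of the app_repeat test. Pieced together from the trace, one cycle has roughly this shape; waitforlisten and nbd_rpc_data_verify are the repo helpers seen earlier, and this is a sketch rather than the literal event.sh body:

    run_round() {
        local i=$1
        echo "spdk_app_start Round $i"
        # wait for the app's RPC socket, then create Malloc0/Malloc1, map them over NBD and verify 1 MiB on each
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # ask the running app to terminate this iteration, then give it time to come back up for the next round
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    }

Round 3 differs only in that the test waits for the relistened socket and then kills the instance outright, which is the shutdown and timing summary that follows.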
00:06:50.563 Shutdown signal received, stop current app iteration 00:06:50.563 10:07:09 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.563 10:07:09 -- event/event.sh@42 -- # return 0 00:06:50.563 00:06:50.563 real 0m19.109s 00:06:50.563 user 0m43.956s 00:06:50.563 sys 0m2.884s 00:06:50.563 10:07:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.563 10:07:09 -- common/autotest_common.sh@10 -- # set +x 00:06:50.563 ************************************ 00:06:50.563 END TEST app_repeat 00:06:50.563 ************************************ 00:06:50.563 10:07:10 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.563 10:07:10 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.563 10:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.563 10:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.563 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.563 ************************************ 00:06:50.563 START TEST cpu_locks 00:06:50.563 ************************************ 00:06:50.563 10:07:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.823 * Looking for test storage... 00:06:50.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.823 10:07:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.823 10:07:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.823 10:07:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.823 10:07:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.823 10:07:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.823 10:07:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.823 10:07:10 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.823 10:07:10 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.823 10:07:10 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.823 10:07:10 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.823 10:07:10 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.823 10:07:10 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.823 10:07:10 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.823 10:07:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.823 10:07:10 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.823 10:07:10 -- scripts/common.sh@344 -- # : 1 00:06:50.823 10:07:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.823 10:07:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.823 10:07:10 -- scripts/common.sh@364 -- # decimal 1 00:06:50.823 10:07:10 -- scripts/common.sh@352 -- # local d=1 00:06:50.823 10:07:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.823 10:07:10 -- scripts/common.sh@354 -- # echo 1 00:06:50.823 10:07:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.823 10:07:10 -- scripts/common.sh@365 -- # decimal 2 00:06:50.823 10:07:10 -- scripts/common.sh@352 -- # local d=2 00:06:50.823 10:07:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.823 10:07:10 -- scripts/common.sh@354 -- # echo 2 00:06:50.823 10:07:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.823 10:07:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.823 10:07:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.823 10:07:10 -- scripts/common.sh@367 -- # return 0 00:06:50.823 10:07:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.823 --rc genhtml_branch_coverage=1 00:06:50.823 --rc genhtml_function_coverage=1 00:06:50.823 --rc genhtml_legend=1 00:06:50.823 --rc geninfo_all_blocks=1 00:06:50.823 --rc geninfo_unexecuted_blocks=1 00:06:50.823 00:06:50.823 ' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.823 --rc genhtml_branch_coverage=1 00:06:50.823 --rc genhtml_function_coverage=1 00:06:50.823 --rc genhtml_legend=1 00:06:50.823 --rc geninfo_all_blocks=1 00:06:50.823 --rc geninfo_unexecuted_blocks=1 00:06:50.823 00:06:50.823 ' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.823 --rc genhtml_branch_coverage=1 00:06:50.823 --rc genhtml_function_coverage=1 00:06:50.823 --rc genhtml_legend=1 00:06:50.823 --rc geninfo_all_blocks=1 00:06:50.823 --rc geninfo_unexecuted_blocks=1 00:06:50.823 00:06:50.823 ' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.823 --rc genhtml_branch_coverage=1 00:06:50.823 --rc genhtml_function_coverage=1 00:06:50.823 --rc genhtml_legend=1 00:06:50.823 --rc geninfo_all_blocks=1 00:06:50.823 --rc geninfo_unexecuted_blocks=1 00:06:50.823 00:06:50.823 ' 00:06:50.823 10:07:10 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.823 10:07:10 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.823 10:07:10 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.823 10:07:10 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.823 10:07:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.823 10:07:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.823 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.823 ************************************ 00:06:50.823 START TEST default_locks 00:06:50.823 ************************************ 00:06:50.823 10:07:10 -- common/autotest_common.sh@1114 -- # default_locks 00:06:50.823 10:07:10 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69279 00:06:50.823 10:07:10 -- event/cpu_locks.sh@47 -- # waitforlisten 69279 00:06:50.823 10:07:10 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:06:50.823 10:07:10 -- common/autotest_common.sh@829 -- # '[' -z 69279 ']' 00:06:50.823 10:07:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.823 10:07:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.823 10:07:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.823 10:07:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.823 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.823 [2024-11-19 10:07:10.306650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.823 [2024-11-19 10:07:10.306777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69279 ] 00:06:51.082 [2024-11-19 10:07:10.444696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.082 [2024-11-19 10:07:10.484171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.082 [2024-11-19 10:07:10.484400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.028 10:07:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.028 10:07:11 -- common/autotest_common.sh@862 -- # return 0 00:06:52.028 10:07:11 -- event/cpu_locks.sh@49 -- # locks_exist 69279 00:06:52.028 10:07:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.028 10:07:11 -- event/cpu_locks.sh@22 -- # lslocks -p 69279 00:06:52.287 10:07:11 -- event/cpu_locks.sh@50 -- # killprocess 69279 00:06:52.287 10:07:11 -- common/autotest_common.sh@936 -- # '[' -z 69279 ']' 00:06:52.287 10:07:11 -- common/autotest_common.sh@940 -- # kill -0 69279 00:06:52.287 10:07:11 -- common/autotest_common.sh@941 -- # uname 00:06:52.287 10:07:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.287 10:07:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69279 00:06:52.547 10:07:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.547 killing process with pid 69279 00:06:52.547 10:07:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.547 10:07:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69279' 00:06:52.547 10:07:11 -- common/autotest_common.sh@955 -- # kill 69279 00:06:52.547 10:07:11 -- common/autotest_common.sh@960 -- # wait 69279 00:06:52.547 10:07:12 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69279 00:06:52.547 10:07:12 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.547 10:07:12 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69279 00:06:52.547 10:07:12 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.547 10:07:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.547 10:07:12 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.547 10:07:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.547 10:07:12 -- common/autotest_common.sh@653 -- # waitforlisten 69279 00:06:52.547 10:07:12 -- common/autotest_common.sh@829 -- # '[' -z 69279 ']' 00:06:52.547 10:07:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.547 10:07:12 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.547 10:07:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.547 10:07:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.547 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.547 ERROR: process (pid: 69279) is no longer running 00:06:52.547 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69279) - No such process 00:06:52.547 10:07:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.547 10:07:12 -- common/autotest_common.sh@862 -- # return 1 00:06:52.547 10:07:12 -- common/autotest_common.sh@653 -- # es=1 00:06:52.547 10:07:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.547 10:07:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.547 10:07:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.547 10:07:12 -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.547 10:07:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.547 10:07:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.547 10:07:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.547 00:06:52.547 real 0m1.833s 00:06:52.547 user 0m2.152s 00:06:52.547 sys 0m0.484s 00:06:52.547 10:07:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.547 ************************************ 00:06:52.547 END TEST default_locks 00:06:52.547 ************************************ 00:06:52.547 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 10:07:12 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.808 10:07:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.808 10:07:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.808 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 ************************************ 00:06:52.808 START TEST default_locks_via_rpc 00:06:52.808 ************************************ 00:06:52.808 10:07:12 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:52.808 10:07:12 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69343 00:06:52.808 10:07:12 -- event/cpu_locks.sh@63 -- # waitforlisten 69343 00:06:52.808 10:07:12 -- common/autotest_common.sh@829 -- # '[' -z 69343 ']' 00:06:52.808 10:07:12 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.808 10:07:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.808 10:07:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.808 10:07:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.808 10:07:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.808 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 [2024-11-19 10:07:12.191056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:52.808 [2024-11-19 10:07:12.191192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69343 ] 00:06:52.808 [2024-11-19 10:07:12.328888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.067 [2024-11-19 10:07:12.368153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.067 [2024-11-19 10:07:12.368338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.003 10:07:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.003 10:07:13 -- common/autotest_common.sh@862 -- # return 0 00:06:54.003 10:07:13 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:54.003 10:07:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.003 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.003 10:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.003 10:07:13 -- event/cpu_locks.sh@67 -- # no_locks 00:06:54.003 10:07:13 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:54.003 10:07:13 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:54.003 10:07:13 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:54.003 10:07:13 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.003 10:07:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.003 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.003 10:07:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.003 10:07:13 -- event/cpu_locks.sh@71 -- # locks_exist 69343 00:06:54.003 10:07:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.003 10:07:13 -- event/cpu_locks.sh@22 -- # lslocks -p 69343 00:06:54.262 10:07:13 -- event/cpu_locks.sh@73 -- # killprocess 69343 00:06:54.262 10:07:13 -- common/autotest_common.sh@936 -- # '[' -z 69343 ']' 00:06:54.262 10:07:13 -- common/autotest_common.sh@940 -- # kill -0 69343 00:06:54.262 10:07:13 -- common/autotest_common.sh@941 -- # uname 00:06:54.262 10:07:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.262 10:07:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69343 00:06:54.262 10:07:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.262 killing process with pid 69343 00:06:54.262 10:07:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.262 10:07:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69343' 00:06:54.262 10:07:13 -- common/autotest_common.sh@955 -- # kill 69343 00:06:54.262 10:07:13 -- common/autotest_common.sh@960 -- # wait 69343 00:06:54.521 00:06:54.521 real 0m1.859s 00:06:54.521 user 0m2.211s 00:06:54.521 sys 0m0.471s 00:06:54.521 10:07:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.521 ************************************ 00:06:54.521 END TEST default_locks_via_rpc 00:06:54.521 ************************************ 00:06:54.521 10:07:13 -- common/autotest_common.sh@10 -- # set +x 00:06:54.521 10:07:14 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:54.521 10:07:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.521 10:07:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.521 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:54.521 
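Before the next case starts, it is worth spelling out what default_locks_via_rpc just exercised: the target is launched with core locks held, the locks are dropped and re-taken over RPC, and each step is checked. A condensed sketch, using the RPC names and helpers exactly as they appear in the trace (rpc_cmd is the repo wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock here):

    # target started with -m 0x1, so it holds the core-0 lock file at startup
    rpc_cmd framework_disable_cpumask_locks            # release the per-core lock
    ! lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # expect: no spdk_cpu_lock entry any more
    rpc_cmd framework_enable_cpumask_locks             # re-acquire it
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock     # expect: the lock is back

The no_locks helper in the trace checks a lock_files array rather than lslocks, but the intent is the same: nothing may hold a core lock while the locks are disabled.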
************************************ 00:06:54.521 START TEST non_locking_app_on_locked_coremask 00:06:54.521 ************************************ 00:06:54.521 10:07:14 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:54.521 10:07:14 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69412 00:06:54.521 10:07:14 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.521 10:07:14 -- event/cpu_locks.sh@81 -- # waitforlisten 69412 /var/tmp/spdk.sock 00:06:54.521 10:07:14 -- common/autotest_common.sh@829 -- # '[' -z 69412 ']' 00:06:54.521 10:07:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.521 10:07:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.521 10:07:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.521 10:07:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.521 10:07:14 -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 [2024-11-19 10:07:14.099150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.780 [2024-11-19 10:07:14.099256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69412 ] 00:06:54.780 [2024-11-19 10:07:14.236042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.780 [2024-11-19 10:07:14.275339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:54.780 [2024-11-19 10:07:14.275522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.717 10:07:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.717 10:07:15 -- common/autotest_common.sh@862 -- # return 0 00:06:55.717 10:07:15 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69440 00:06:55.717 10:07:15 -- event/cpu_locks.sh@85 -- # waitforlisten 69440 /var/tmp/spdk2.sock 00:06:55.717 10:07:15 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:55.717 10:07:15 -- common/autotest_common.sh@829 -- # '[' -z 69440 ']' 00:06:55.717 10:07:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.717 10:07:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.717 10:07:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.717 10:07:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.717 10:07:15 -- common/autotest_common.sh@10 -- # set +x 00:06:55.717 [2024-11-19 10:07:15.244672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.717 [2024-11-19 10:07:15.244807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69440 ] 00:06:55.976 [2024-11-19 10:07:15.392341] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
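The two launches in this case differ only in the lock flag and the RPC socket; both use the same core mask. A rough sketch with the flags as they appear in the trace (the lock-file name is the one printed by the later overlapped test):

    # first target claims core 0 and holds /var/tmp/spdk_cpu_lock_000
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid1=$!
    # second target reuses the mask but skips lock acquisition, on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

Both instances are expected to come up; only the first one owns the core-0 lock, which the locks_exist check that follows verifies.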
00:06:55.976 [2024-11-19 10:07:15.392393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.976 [2024-11-19 10:07:15.461166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.976 [2024-11-19 10:07:15.461308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.912 10:07:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.912 10:07:16 -- common/autotest_common.sh@862 -- # return 0 00:06:56.912 10:07:16 -- event/cpu_locks.sh@87 -- # locks_exist 69412 00:06:56.912 10:07:16 -- event/cpu_locks.sh@22 -- # lslocks -p 69412 00:06:56.912 10:07:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.850 10:07:17 -- event/cpu_locks.sh@89 -- # killprocess 69412 00:06:57.850 10:07:17 -- common/autotest_common.sh@936 -- # '[' -z 69412 ']' 00:06:57.850 10:07:17 -- common/autotest_common.sh@940 -- # kill -0 69412 00:06:57.850 10:07:17 -- common/autotest_common.sh@941 -- # uname 00:06:57.850 10:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.850 10:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69412 00:06:57.850 10:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.850 killing process with pid 69412 00:06:57.850 10:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.850 10:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69412' 00:06:57.850 10:07:17 -- common/autotest_common.sh@955 -- # kill 69412 00:06:57.850 10:07:17 -- common/autotest_common.sh@960 -- # wait 69412 00:06:58.109 10:07:17 -- event/cpu_locks.sh@90 -- # killprocess 69440 00:06:58.109 10:07:17 -- common/autotest_common.sh@936 -- # '[' -z 69440 ']' 00:06:58.109 10:07:17 -- common/autotest_common.sh@940 -- # kill -0 69440 00:06:58.109 10:07:17 -- common/autotest_common.sh@941 -- # uname 00:06:58.109 10:07:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.109 10:07:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69440 00:06:58.109 10:07:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.109 killing process with pid 69440 00:06:58.109 10:07:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.109 10:07:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69440' 00:06:58.109 10:07:17 -- common/autotest_common.sh@955 -- # kill 69440 00:06:58.109 10:07:17 -- common/autotest_common.sh@960 -- # wait 69440 00:06:58.368 00:06:58.368 real 0m3.857s 00:06:58.368 user 0m4.676s 00:06:58.368 sys 0m0.962s 00:06:58.368 10:07:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.368 10:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.368 ************************************ 00:06:58.368 END TEST non_locking_app_on_locked_coremask 00:06:58.368 ************************************ 00:06:58.627 10:07:17 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:58.627 10:07:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.627 10:07:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.627 10:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.627 ************************************ 00:06:58.627 START TEST locking_app_on_unlocked_coremask 00:06:58.627 ************************************ 00:06:58.627 10:07:17 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:58.627 10:07:17 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69520 00:06:58.627 10:07:17 -- event/cpu_locks.sh@99 -- # waitforlisten 69520 /var/tmp/spdk.sock 00:06:58.627 10:07:17 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:58.627 10:07:17 -- common/autotest_common.sh@829 -- # '[' -z 69520 ']' 00:06:58.627 10:07:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.627 10:07:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.627 10:07:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.627 10:07:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.627 10:07:17 -- common/autotest_common.sh@10 -- # set +x 00:06:58.627 [2024-11-19 10:07:18.001544] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:58.627 [2024-11-19 10:07:18.001630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69520 ] 00:06:58.628 [2024-11-19 10:07:18.137686] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:58.628 [2024-11-19 10:07:18.137760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.628 [2024-11-19 10:07:18.172749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.628 [2024-11-19 10:07:18.172959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.004 10:07:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.004 10:07:19 -- common/autotest_common.sh@862 -- # return 0 00:07:00.004 10:07:19 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69548 00:07:00.004 10:07:19 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.004 10:07:19 -- event/cpu_locks.sh@103 -- # waitforlisten 69548 /var/tmp/spdk2.sock 00:07:00.004 10:07:19 -- common/autotest_common.sh@829 -- # '[' -z 69548 ']' 00:07:00.004 10:07:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.004 10:07:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.004 10:07:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.004 10:07:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.004 10:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:00.004 [2024-11-19 10:07:19.274019] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:00.004 [2024-11-19 10:07:19.274133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69548 ] 00:07:00.004 [2024-11-19 10:07:19.419336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.004 [2024-11-19 10:07:19.485317] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:00.004 [2024-11-19 10:07:19.485473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.940 10:07:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.940 10:07:20 -- common/autotest_common.sh@862 -- # return 0 00:07:00.940 10:07:20 -- event/cpu_locks.sh@105 -- # locks_exist 69548 00:07:00.940 10:07:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.940 10:07:20 -- event/cpu_locks.sh@22 -- # lslocks -p 69548 00:07:01.878 10:07:21 -- event/cpu_locks.sh@107 -- # killprocess 69520 00:07:01.878 10:07:21 -- common/autotest_common.sh@936 -- # '[' -z 69520 ']' 00:07:01.878 10:07:21 -- common/autotest_common.sh@940 -- # kill -0 69520 00:07:01.878 10:07:21 -- common/autotest_common.sh@941 -- # uname 00:07:01.878 10:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.878 10:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69520 00:07:01.878 10:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.878 10:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.878 killing process with pid 69520 00:07:01.878 10:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69520' 00:07:01.878 10:07:21 -- common/autotest_common.sh@955 -- # kill 69520 00:07:01.878 10:07:21 -- common/autotest_common.sh@960 -- # wait 69520 00:07:02.137 10:07:21 -- event/cpu_locks.sh@108 -- # killprocess 69548 00:07:02.137 10:07:21 -- common/autotest_common.sh@936 -- # '[' -z 69548 ']' 00:07:02.138 10:07:21 -- common/autotest_common.sh@940 -- # kill -0 69548 00:07:02.138 10:07:21 -- common/autotest_common.sh@941 -- # uname 00:07:02.138 10:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.138 10:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69548 00:07:02.138 10:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.138 10:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.138 10:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69548' 00:07:02.138 killing process with pid 69548 00:07:02.138 10:07:21 -- common/autotest_common.sh@955 -- # kill 69548 00:07:02.138 10:07:21 -- common/autotest_common.sh@960 -- # wait 69548 00:07:02.396 00:07:02.396 real 0m3.900s 00:07:02.396 user 0m4.834s 00:07:02.396 sys 0m0.965s 00:07:02.396 10:07:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.396 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.396 ************************************ 00:07:02.396 END TEST locking_app_on_unlocked_coremask 00:07:02.396 ************************************ 00:07:02.396 10:07:21 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.396 10:07:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.396 10:07:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.396 10:07:21 -- common/autotest_common.sh@10 -- # set +x 
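Every one of these cases tears its targets down through the same killprocess helper that keeps appearing in the trace (kill -0, ps, kill, wait). Reconstructed from the xtrace it looks roughly like this; the sudo special case is elided:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                           # fail fast if the target already exited
        local process_name
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        # the real helper branches here when the process is a sudo wrapper; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # block until it has fully exited
    }

Waiting on the pid matters so that the next case starts only after the previous target has exited and released its core locks.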
00:07:02.396 ************************************ 00:07:02.396 START TEST locking_app_on_locked_coremask 00:07:02.396 ************************************ 00:07:02.396 10:07:21 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:02.396 10:07:21 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69616 00:07:02.396 10:07:21 -- event/cpu_locks.sh@116 -- # waitforlisten 69616 /var/tmp/spdk.sock 00:07:02.396 10:07:21 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.396 10:07:21 -- common/autotest_common.sh@829 -- # '[' -z 69616 ']' 00:07:02.396 10:07:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.396 10:07:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.396 10:07:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.396 10:07:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.396 10:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:02.656 [2024-11-19 10:07:21.984274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:02.656 [2024-11-19 10:07:21.984371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69616 ] 00:07:02.656 [2024-11-19 10:07:22.111990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.656 [2024-11-19 10:07:22.145752] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.656 [2024-11-19 10:07:22.145964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.592 10:07:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.592 10:07:22 -- common/autotest_common.sh@862 -- # return 0 00:07:03.592 10:07:22 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.592 10:07:22 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69650 00:07:03.592 10:07:22 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69650 /var/tmp/spdk2.sock 00:07:03.592 10:07:22 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.592 10:07:22 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69650 /var/tmp/spdk2.sock 00:07:03.592 10:07:22 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.592 10:07:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.592 10:07:22 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.592 10:07:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.592 10:07:22 -- common/autotest_common.sh@653 -- # waitforlisten 69650 /var/tmp/spdk2.sock 00:07:03.592 10:07:22 -- common/autotest_common.sh@829 -- # '[' -z 69650 ']' 00:07:03.592 10:07:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.592 10:07:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.592 10:07:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
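Unlike the previous case, the second launch here keeps core locks enabled, so its waitforlisten is wrapped in the NOT helper and is expected to fail. Stripped of trace noise, NOT is essentially an exit-status inverter; this sketch follows the es bookkeeping visible in the trace and omits the signal and allowed-error handling from autotest_common.sh:

    NOT() {
        local es=0
        "$@" || es=$?        # run the wrapped command and remember how it exited
        (( !es == 0 ))       # succeed only if the wrapped command failed
    }

The ERROR lines that follow are therefore the expected outcome of this case, not a test failure.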
00:07:03.592 10:07:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.592 10:07:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.592 [2024-11-19 10:07:23.057864] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.592 [2024-11-19 10:07:23.058007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69650 ] 00:07:03.851 [2024-11-19 10:07:23.201365] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69616 has claimed it. 00:07:03.851 [2024-11-19 10:07:23.201441] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.419 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69650) - No such process 00:07:04.419 ERROR: process (pid: 69650) is no longer running 00:07:04.419 10:07:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.419 10:07:23 -- common/autotest_common.sh@862 -- # return 1 00:07:04.419 10:07:23 -- common/autotest_common.sh@653 -- # es=1 00:07:04.419 10:07:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.419 10:07:23 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.419 10:07:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.419 10:07:23 -- event/cpu_locks.sh@122 -- # locks_exist 69616 00:07:04.419 10:07:23 -- event/cpu_locks.sh@22 -- # lslocks -p 69616 00:07:04.419 10:07:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.677 10:07:24 -- event/cpu_locks.sh@124 -- # killprocess 69616 00:07:04.677 10:07:24 -- common/autotest_common.sh@936 -- # '[' -z 69616 ']' 00:07:04.677 10:07:24 -- common/autotest_common.sh@940 -- # kill -0 69616 00:07:04.677 10:07:24 -- common/autotest_common.sh@941 -- # uname 00:07:04.677 10:07:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.677 10:07:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69616 00:07:04.935 10:07:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.935 10:07:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.935 killing process with pid 69616 00:07:04.935 10:07:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69616' 00:07:04.935 10:07:24 -- common/autotest_common.sh@955 -- # kill 69616 00:07:04.935 10:07:24 -- common/autotest_common.sh@960 -- # wait 69616 00:07:04.935 00:07:04.935 real 0m2.551s 00:07:04.935 user 0m3.114s 00:07:04.935 sys 0m0.598s 00:07:04.935 10:07:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.935 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:04.935 ************************************ 00:07:04.935 END TEST locking_app_on_locked_coremask 00:07:04.935 ************************************ 00:07:05.194 10:07:24 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:05.194 10:07:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.194 10:07:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.194 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.194 ************************************ 00:07:05.194 START TEST locking_overlapped_coremask 00:07:05.194 ************************************ 00:07:05.194 10:07:24 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:05.194 10:07:24 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69701 00:07:05.194 10:07:24 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:05.194 10:07:24 -- event/cpu_locks.sh@133 -- # waitforlisten 69701 /var/tmp/spdk.sock 00:07:05.194 10:07:24 -- common/autotest_common.sh@829 -- # '[' -z 69701 ']' 00:07:05.194 10:07:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.194 10:07:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.194 10:07:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.194 10:07:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.194 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.195 [2024-11-19 10:07:24.575971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.195 [2024-11-19 10:07:24.576095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69701 ] 00:07:05.195 [2024-11-19 10:07:24.718189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.453 [2024-11-19 10:07:24.753792] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.453 [2024-11-19 10:07:24.754099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.453 [2024-11-19 10:07:24.754246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.453 [2024-11-19 10:07:24.754250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.391 10:07:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.391 10:07:25 -- common/autotest_common.sh@862 -- # return 0 00:07:06.391 10:07:25 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69731 00:07:06.391 10:07:25 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:06.391 10:07:25 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69731 /var/tmp/spdk2.sock 00:07:06.391 10:07:25 -- common/autotest_common.sh@650 -- # local es=0 00:07:06.391 10:07:25 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 69731 /var/tmp/spdk2.sock 00:07:06.391 10:07:25 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.391 10:07:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.391 10:07:25 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.391 10:07:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.391 10:07:25 -- common/autotest_common.sh@653 -- # waitforlisten 69731 /var/tmp/spdk2.sock 00:07:06.391 10:07:25 -- common/autotest_common.sh@829 -- # '[' -z 69731 ']' 00:07:06.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.391 10:07:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.391 10:07:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.391 10:07:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
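The masks chosen here overlap on exactly one core, which is why the second target below must fail while the first keeps running. A one-liner makes the overlap visible (plain shell arithmetic, nothing SPDK-specific):

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is requested by both targets

Only one lock-holding instance may own a core at a time, so the second target aborts while claiming core 2, as the error just below shows.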
00:07:06.391 10:07:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.391 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:07:06.391 [2024-11-19 10:07:25.667786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.391 [2024-11-19 10:07:25.667914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69731 ] 00:07:06.391 [2024-11-19 10:07:25.811990] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69701 has claimed it. 00:07:06.391 [2024-11-19 10:07:25.812070] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.959 ERROR: process (pid: 69731) is no longer running 00:07:06.959 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (69731) - No such process 00:07:06.959 10:07:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.959 10:07:26 -- common/autotest_common.sh@862 -- # return 1 00:07:06.960 10:07:26 -- common/autotest_common.sh@653 -- # es=1 00:07:06.960 10:07:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.960 10:07:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.960 10:07:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.960 10:07:26 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.960 10:07:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.960 10:07:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.960 10:07:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.960 10:07:26 -- event/cpu_locks.sh@141 -- # killprocess 69701 00:07:06.960 10:07:26 -- common/autotest_common.sh@936 -- # '[' -z 69701 ']' 00:07:06.960 10:07:26 -- common/autotest_common.sh@940 -- # kill -0 69701 00:07:06.960 10:07:26 -- common/autotest_common.sh@941 -- # uname 00:07:06.960 10:07:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.960 10:07:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69701 00:07:06.960 10:07:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.960 10:07:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.960 10:07:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69701' 00:07:06.960 killing process with pid 69701 00:07:06.960 10:07:26 -- common/autotest_common.sh@955 -- # kill 69701 00:07:06.960 10:07:26 -- common/autotest_common.sh@960 -- # wait 69701 00:07:07.219 00:07:07.219 real 0m2.130s 00:07:07.219 user 0m6.230s 00:07:07.219 sys 0m0.333s 00:07:07.219 10:07:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.219 10:07:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.219 ************************************ 00:07:07.219 END TEST locking_overlapped_coremask 00:07:07.219 ************************************ 00:07:07.219 10:07:26 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:07.219 10:07:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.219 10:07:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.219 10:07:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.219 ************************************ 00:07:07.219 START TEST locking_overlapped_coremask_via_rpc 00:07:07.219 ************************************ 00:07:07.219 10:07:26 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:07.219 10:07:26 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69777 00:07:07.219 10:07:26 -- event/cpu_locks.sh@149 -- # waitforlisten 69777 /var/tmp/spdk.sock 00:07:07.219 10:07:26 -- common/autotest_common.sh@829 -- # '[' -z 69777 ']' 00:07:07.219 10:07:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.219 10:07:26 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:07.219 10:07:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.219 10:07:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.219 10:07:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.219 10:07:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.219 [2024-11-19 10:07:26.745342] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.219 [2024-11-19 10:07:26.745449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69777 ] 00:07:07.478 [2024-11-19 10:07:26.882190] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:07.478 [2024-11-19 10:07:26.882257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.478 [2024-11-19 10:07:26.923585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.478 [2024-11-19 10:07:26.924095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.478 [2024-11-19 10:07:26.924170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.478 [2024-11-19 10:07:26.924175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.413 10:07:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.413 10:07:27 -- common/autotest_common.sh@862 -- # return 0 00:07:08.413 10:07:27 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69807 00:07:08.413 10:07:27 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:08.413 10:07:27 -- event/cpu_locks.sh@153 -- # waitforlisten 69807 /var/tmp/spdk2.sock 00:07:08.413 10:07:27 -- common/autotest_common.sh@829 -- # '[' -z 69807 ']' 00:07:08.413 10:07:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.413 10:07:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.413 10:07:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:08.413 10:07:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.413 10:07:27 -- common/autotest_common.sh@10 -- # set +x 00:07:08.413 [2024-11-19 10:07:27.809044] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.413 [2024-11-19 10:07:27.809364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69807 ] 00:07:08.413 [2024-11-19 10:07:27.953506] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.413 [2024-11-19 10:07:27.953564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.670 [2024-11-19 10:07:28.025967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.670 [2024-11-19 10:07:28.026187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.670 [2024-11-19 10:07:28.026307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.670 [2024-11-19 10:07:28.026308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.604 10:07:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.604 10:07:28 -- common/autotest_common.sh@862 -- # return 0 00:07:09.604 10:07:28 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.604 10:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.604 10:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:09.604 10:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.604 10:07:28 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.604 10:07:28 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.604 10:07:28 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.604 10:07:28 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:09.604 10:07:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.604 10:07:28 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:09.604 10:07:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.604 10:07:28 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.604 10:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.604 10:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:09.604 [2024-11-19 10:07:28.913977] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69777 has claimed it. 
00:07:09.604 2024/11/19 10:07:28 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:07:09.604 request: 00:07:09.604 { 00:07:09.604 "method": "framework_enable_cpumask_locks", 00:07:09.604 "params": {} 00:07:09.604 } 00:07:09.604 Got JSON-RPC error response 00:07:09.604 GoRPCClient: error on JSON-RPC call 00:07:09.604 10:07:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:09.604 10:07:28 -- common/autotest_common.sh@653 -- # es=1 00:07:09.604 10:07:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.604 10:07:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.604 10:07:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.604 10:07:28 -- event/cpu_locks.sh@158 -- # waitforlisten 69777 /var/tmp/spdk.sock 00:07:09.604 10:07:28 -- common/autotest_common.sh@829 -- # '[' -z 69777 ']' 00:07:09.604 10:07:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.604 10:07:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.604 10:07:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.604 10:07:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.604 10:07:28 -- common/autotest_common.sh@10 -- # set +x 00:07:09.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.861 10:07:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.861 10:07:29 -- common/autotest_common.sh@862 -- # return 0 00:07:09.861 10:07:29 -- event/cpu_locks.sh@159 -- # waitforlisten 69807 /var/tmp/spdk2.sock 00:07:09.861 10:07:29 -- common/autotest_common.sh@829 -- # '[' -z 69807 ']' 00:07:09.861 10:07:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.861 10:07:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.861 10:07:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:09.861 10:07:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.861 10:07:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.119 ************************************ 00:07:10.119 END TEST locking_overlapped_coremask_via_rpc 00:07:10.119 ************************************ 00:07:10.119 10:07:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.119 10:07:29 -- common/autotest_common.sh@862 -- # return 0 00:07:10.119 10:07:29 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.119 10:07:29 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.119 10:07:29 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.119 10:07:29 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.119 00:07:10.119 real 0m2.855s 00:07:10.119 user 0m1.545s 00:07:10.119 sys 0m0.226s 00:07:10.119 10:07:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.119 10:07:29 -- common/autotest_common.sh@10 -- # set +x 00:07:10.119 10:07:29 -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.119 10:07:29 -- event/cpu_locks.sh@15 -- # [[ -z 69777 ]] 00:07:10.119 10:07:29 -- event/cpu_locks.sh@15 -- # killprocess 69777 00:07:10.119 10:07:29 -- common/autotest_common.sh@936 -- # '[' -z 69777 ']' 00:07:10.119 10:07:29 -- common/autotest_common.sh@940 -- # kill -0 69777 00:07:10.119 10:07:29 -- common/autotest_common.sh@941 -- # uname 00:07:10.119 10:07:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.119 10:07:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69777 00:07:10.119 killing process with pid 69777 00:07:10.119 10:07:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.119 10:07:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.119 10:07:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69777' 00:07:10.119 10:07:29 -- common/autotest_common.sh@955 -- # kill 69777 00:07:10.119 10:07:29 -- common/autotest_common.sh@960 -- # wait 69777 00:07:10.377 10:07:29 -- event/cpu_locks.sh@16 -- # [[ -z 69807 ]] 00:07:10.377 10:07:29 -- event/cpu_locks.sh@16 -- # killprocess 69807 00:07:10.377 10:07:29 -- common/autotest_common.sh@936 -- # '[' -z 69807 ']' 00:07:10.377 10:07:29 -- common/autotest_common.sh@940 -- # kill -0 69807 00:07:10.377 10:07:29 -- common/autotest_common.sh@941 -- # uname 00:07:10.377 10:07:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.377 10:07:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69807 00:07:10.377 killing process with pid 69807 00:07:10.377 10:07:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:10.377 10:07:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:10.377 10:07:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69807' 00:07:10.377 10:07:29 -- common/autotest_common.sh@955 -- # kill 69807 00:07:10.377 10:07:29 -- common/autotest_common.sh@960 -- # wait 69807 00:07:10.635 10:07:30 -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.635 Process with pid 69777 is not found 00:07:10.635 Process with pid 69807 is not found 00:07:10.635 10:07:30 -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.635 10:07:30 -- event/cpu_locks.sh@15 -- # [[ -z 69777 ]] 
00:07:10.635 10:07:30 -- event/cpu_locks.sh@15 -- # killprocess 69777 00:07:10.635 10:07:30 -- common/autotest_common.sh@936 -- # '[' -z 69777 ']' 00:07:10.635 10:07:30 -- common/autotest_common.sh@940 -- # kill -0 69777 00:07:10.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69777) - No such process 00:07:10.635 10:07:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69777 is not found' 00:07:10.635 10:07:30 -- event/cpu_locks.sh@16 -- # [[ -z 69807 ]] 00:07:10.635 10:07:30 -- event/cpu_locks.sh@16 -- # killprocess 69807 00:07:10.635 10:07:30 -- common/autotest_common.sh@936 -- # '[' -z 69807 ']' 00:07:10.635 10:07:30 -- common/autotest_common.sh@940 -- # kill -0 69807 00:07:10.635 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (69807) - No such process 00:07:10.635 10:07:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 69807 is not found' 00:07:10.635 10:07:30 -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.635 ************************************ 00:07:10.635 END TEST cpu_locks 00:07:10.635 ************************************ 00:07:10.635 00:07:10.635 real 0m20.080s 00:07:10.635 user 0m37.745s 00:07:10.635 sys 0m4.737s 00:07:10.635 10:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.635 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.635 ************************************ 00:07:10.635 END TEST event 00:07:10.635 ************************************ 00:07:10.635 00:07:10.635 real 0m47.231s 00:07:10.635 user 1m33.889s 00:07:10.635 sys 0m8.304s 00:07:10.635 10:07:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.635 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.894 10:07:30 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:10.894 10:07:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.894 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.894 ************************************ 00:07:10.894 START TEST thread 00:07:10.894 ************************************ 00:07:10.894 10:07:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:10.894 * Looking for test storage... 
00:07:10.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:10.894 10:07:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.894 10:07:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.894 10:07:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.894 10:07:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.894 10:07:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.894 10:07:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.894 10:07:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.894 10:07:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.894 10:07:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.894 10:07:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.894 10:07:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.894 10:07:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.894 10:07:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.894 10:07:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.894 10:07:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.894 10:07:30 -- scripts/common.sh@344 -- # : 1 00:07:10.894 10:07:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.894 10:07:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.894 10:07:30 -- scripts/common.sh@364 -- # decimal 1 00:07:10.894 10:07:30 -- scripts/common.sh@352 -- # local d=1 00:07:10.894 10:07:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.894 10:07:30 -- scripts/common.sh@354 -- # echo 1 00:07:10.894 10:07:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.894 10:07:30 -- scripts/common.sh@365 -- # decimal 2 00:07:10.894 10:07:30 -- scripts/common.sh@352 -- # local d=2 00:07:10.894 10:07:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.894 10:07:30 -- scripts/common.sh@354 -- # echo 2 00:07:10.894 10:07:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.894 10:07:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.894 10:07:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.894 10:07:30 -- scripts/common.sh@367 -- # return 0 00:07:10.894 10:07:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.894 --rc genhtml_branch_coverage=1 00:07:10.894 --rc genhtml_function_coverage=1 00:07:10.894 --rc genhtml_legend=1 00:07:10.894 --rc geninfo_all_blocks=1 00:07:10.894 --rc geninfo_unexecuted_blocks=1 00:07:10.894 00:07:10.894 ' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.894 --rc genhtml_branch_coverage=1 00:07:10.894 --rc genhtml_function_coverage=1 00:07:10.894 --rc genhtml_legend=1 00:07:10.894 --rc geninfo_all_blocks=1 00:07:10.894 --rc geninfo_unexecuted_blocks=1 00:07:10.894 00:07:10.894 ' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.894 --rc genhtml_branch_coverage=1 00:07:10.894 --rc genhtml_function_coverage=1 00:07:10.894 --rc genhtml_legend=1 00:07:10.894 --rc geninfo_all_blocks=1 00:07:10.894 --rc geninfo_unexecuted_blocks=1 00:07:10.894 00:07:10.894 ' 00:07:10.894 10:07:30 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.894 --rc genhtml_branch_coverage=1 00:07:10.894 --rc genhtml_function_coverage=1 00:07:10.894 --rc genhtml_legend=1 00:07:10.894 --rc geninfo_all_blocks=1 00:07:10.894 --rc geninfo_unexecuted_blocks=1 00:07:10.894 00:07:10.894 ' 00:07:10.894 10:07:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.894 10:07:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:10.894 10:07:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.894 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:07:10.894 ************************************ 00:07:10.894 START TEST thread_poller_perf 00:07:10.894 ************************************ 00:07:10.894 10:07:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.894 [2024-11-19 10:07:30.415876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:10.894 [2024-11-19 10:07:30.416108] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69966 ] 00:07:11.153 [2024-11-19 10:07:30.551594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.153 [2024-11-19 10:07:30.587076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.153 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:12.529 [2024-11-19T10:07:32.075Z] ====================================== 00:07:12.529 [2024-11-19T10:07:32.075Z] busy:2208588216 (cyc) 00:07:12.529 [2024-11-19T10:07:32.075Z] total_run_count: 278000 00:07:12.529 [2024-11-19T10:07:32.075Z] tsc_hz: 2200000000 (cyc) 00:07:12.529 [2024-11-19T10:07:32.075Z] ====================================== 00:07:12.529 [2024-11-19T10:07:32.075Z] poller_cost: 7944 (cyc), 3610 (nsec) 00:07:12.529 00:07:12.529 real 0m1.253s 00:07:12.529 user 0m1.100s 00:07:12.529 sys 0m0.044s 00:07:12.529 10:07:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.529 ************************************ 00:07:12.529 END TEST thread_poller_perf 00:07:12.529 ************************************ 00:07:12.529 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.529 10:07:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.529 10:07:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:12.529 10:07:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.529 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:07:12.529 ************************************ 00:07:12.529 START TEST thread_poller_perf 00:07:12.529 ************************************ 00:07:12.529 10:07:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.529 [2024-11-19 10:07:31.722642] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:12.529 [2024-11-19 10:07:31.722749] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69996 ] 00:07:12.529 [2024-11-19 10:07:31.862165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.529 [2024-11-19 10:07:31.902301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.529 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:13.464 [2024-11-19T10:07:33.010Z] ====================================== 00:07:13.464 [2024-11-19T10:07:33.010Z] busy:2203057646 (cyc) 00:07:13.464 [2024-11-19T10:07:33.010Z] total_run_count: 3765000 00:07:13.464 [2024-11-19T10:07:33.010Z] tsc_hz: 2200000000 (cyc) 00:07:13.464 [2024-11-19T10:07:33.010Z] ====================================== 00:07:13.464 [2024-11-19T10:07:33.010Z] poller_cost: 585 (cyc), 265 (nsec) 00:07:13.464 00:07:13.464 real 0m1.257s 00:07:13.464 user 0m1.102s 00:07:13.464 sys 0m0.046s 00:07:13.464 10:07:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.464 10:07:32 -- common/autotest_common.sh@10 -- # set +x 00:07:13.464 ************************************ 00:07:13.464 END TEST thread_poller_perf 00:07:13.464 ************************************ 00:07:13.464 10:07:33 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:13.464 00:07:13.464 real 0m2.786s 00:07:13.464 user 0m2.335s 00:07:13.464 sys 0m0.233s 00:07:13.464 10:07:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.464 10:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.464 ************************************ 00:07:13.464 END TEST thread 00:07:13.464 ************************************ 00:07:13.724 10:07:33 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:13.724 10:07:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.724 10:07:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.724 10:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.724 ************************************ 00:07:13.724 START TEST accel 00:07:13.724 ************************************ 00:07:13.724 10:07:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:13.724 * Looking for test storage... 
00:07:13.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:13.724 10:07:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.724 10:07:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.724 10:07:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.724 10:07:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.724 10:07:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.724 10:07:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.724 10:07:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.724 10:07:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.724 10:07:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.724 10:07:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.724 10:07:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.724 10:07:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.724 10:07:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.724 10:07:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.724 10:07:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.724 10:07:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.724 10:07:33 -- scripts/common.sh@344 -- # : 1 00:07:13.724 10:07:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.724 10:07:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.724 10:07:33 -- scripts/common.sh@364 -- # decimal 1 00:07:13.724 10:07:33 -- scripts/common.sh@352 -- # local d=1 00:07:13.724 10:07:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.724 10:07:33 -- scripts/common.sh@354 -- # echo 1 00:07:13.724 10:07:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.724 10:07:33 -- scripts/common.sh@365 -- # decimal 2 00:07:13.724 10:07:33 -- scripts/common.sh@352 -- # local d=2 00:07:13.724 10:07:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.724 10:07:33 -- scripts/common.sh@354 -- # echo 2 00:07:13.724 10:07:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.724 10:07:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.724 10:07:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.724 10:07:33 -- scripts/common.sh@367 -- # return 0 00:07:13.724 10:07:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.724 10:07:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.724 --rc genhtml_branch_coverage=1 00:07:13.724 --rc genhtml_function_coverage=1 00:07:13.724 --rc genhtml_legend=1 00:07:13.724 --rc geninfo_all_blocks=1 00:07:13.724 --rc geninfo_unexecuted_blocks=1 00:07:13.724 00:07:13.724 ' 00:07:13.724 10:07:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.724 --rc genhtml_branch_coverage=1 00:07:13.724 --rc genhtml_function_coverage=1 00:07:13.724 --rc genhtml_legend=1 00:07:13.724 --rc geninfo_all_blocks=1 00:07:13.724 --rc geninfo_unexecuted_blocks=1 00:07:13.724 00:07:13.724 ' 00:07:13.725 10:07:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.725 --rc genhtml_branch_coverage=1 00:07:13.725 --rc genhtml_function_coverage=1 00:07:13.725 --rc genhtml_legend=1 00:07:13.725 --rc geninfo_all_blocks=1 00:07:13.725 --rc geninfo_unexecuted_blocks=1 00:07:13.725 00:07:13.725 ' 00:07:13.725 10:07:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.725 --rc genhtml_branch_coverage=1 00:07:13.725 --rc genhtml_function_coverage=1 00:07:13.725 --rc genhtml_legend=1 00:07:13.725 --rc geninfo_all_blocks=1 00:07:13.725 --rc geninfo_unexecuted_blocks=1 00:07:13.725 00:07:13.725 ' 00:07:13.725 10:07:33 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:13.725 10:07:33 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:13.725 10:07:33 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.725 10:07:33 -- accel/accel.sh@59 -- # spdk_tgt_pid=70078 00:07:13.725 10:07:33 -- accel/accel.sh@60 -- # waitforlisten 70078 00:07:13.725 10:07:33 -- common/autotest_common.sh@829 -- # '[' -z 70078 ']' 00:07:13.725 10:07:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.725 10:07:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.725 10:07:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.725 10:07:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.725 10:07:33 -- accel/accel.sh@58 -- # build_accel_config 00:07:13.725 10:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:13.725 10:07:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.725 10:07:33 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:13.725 10:07:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.725 10:07:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.725 10:07:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.725 10:07:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.725 10:07:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.725 10:07:33 -- accel/accel.sh@42 -- # jq -r . 00:07:13.982 [2024-11-19 10:07:33.290957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.982 [2024-11-19 10:07:33.291052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70078 ] 00:07:13.982 [2024-11-19 10:07:33.432317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.982 [2024-11-19 10:07:33.470694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.982 [2024-11-19 10:07:33.470888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.917 10:07:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.917 10:07:34 -- common/autotest_common.sh@862 -- # return 0 00:07:14.917 10:07:34 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:14.917 10:07:34 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:14.917 10:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.917 10:07:34 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:14.917 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.917 10:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.917 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.917 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.917 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.917 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.917 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.917 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.917 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.917 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.917 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 
00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # IFS== 00:07:14.918 10:07:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.918 10:07:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.918 10:07:34 -- accel/accel.sh@67 -- # killprocess 70078 00:07:14.918 10:07:34 -- common/autotest_common.sh@936 -- # '[' -z 70078 ']' 00:07:14.918 10:07:34 -- common/autotest_common.sh@940 -- # kill -0 70078 00:07:14.918 10:07:34 -- common/autotest_common.sh@941 -- # uname 00:07:14.918 10:07:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.918 10:07:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70078 00:07:14.918 killing process with pid 70078 00:07:14.918 10:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.918 10:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.918 10:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70078' 00:07:14.918 10:07:34 -- common/autotest_common.sh@955 -- # kill 70078 00:07:14.918 10:07:34 -- common/autotest_common.sh@960 -- # wait 70078 00:07:15.177 10:07:34 -- accel/accel.sh@68 -- # trap - ERR 00:07:15.177 10:07:34 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:15.177 10:07:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:15.177 10:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.177 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.177 10:07:34 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:15.177 10:07:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:15.177 10:07:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.177 10:07:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.177 10:07:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.177 10:07:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.177 10:07:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.177 10:07:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.177 10:07:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.177 10:07:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:15.177 10:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.177 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.177 10:07:34 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:15.177 10:07:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.177 10:07:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.177 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.177 ************************************ 00:07:15.177 START TEST accel_missing_filename 00:07:15.177 ************************************ 00:07:15.177 10:07:34 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:15.177 10:07:34 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.177 10:07:34 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:15.177 10:07:34 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.177 10:07:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.177 10:07:34 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.177 10:07:34 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.178 10:07:34 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:15.178 10:07:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:15.178 10:07:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.178 10:07:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.178 10:07:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.178 10:07:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.178 10:07:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.178 10:07:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.178 10:07:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.178 10:07:34 -- accel/accel.sh@42 -- # jq -r . 00:07:15.178 [2024-11-19 10:07:34.682742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.178 [2024-11-19 10:07:34.683351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70147 ] 00:07:15.436 [2024-11-19 10:07:34.821070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.436 [2024-11-19 10:07:34.854608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.436 [2024-11-19 10:07:34.883974] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.436 [2024-11-19 10:07:34.924090] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:15.695 A filename is required. 
00:07:15.695 10:07:34 -- common/autotest_common.sh@653 -- # es=234 00:07:15.695 10:07:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.695 10:07:34 -- common/autotest_common.sh@662 -- # es=106 00:07:15.695 10:07:34 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.695 10:07:34 -- common/autotest_common.sh@670 -- # es=1 00:07:15.695 10:07:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.695 00:07:15.695 real 0m0.335s 00:07:15.695 user 0m0.217s 00:07:15.695 sys 0m0.074s 00:07:15.695 10:07:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.695 ************************************ 00:07:15.695 END TEST accel_missing_filename 00:07:15.695 ************************************ 00:07:15.695 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:15.695 10:07:35 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.695 10:07:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:15.695 10:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.695 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.695 ************************************ 00:07:15.695 START TEST accel_compress_verify 00:07:15.695 ************************************ 00:07:15.695 10:07:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.695 10:07:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.695 10:07:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.695 10:07:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.695 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.695 10:07:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.695 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.695 10:07:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.695 10:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.695 10:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.695 10:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.695 10:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.695 10:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.695 10:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.695 10:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.695 10:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.695 10:07:35 -- accel/accel.sh@42 -- # jq -r . 00:07:15.695 [2024-11-19 10:07:35.057716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.695 [2024-11-19 10:07:35.057791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70166 ] 00:07:15.695 [2024-11-19 10:07:35.188141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.695 [2024-11-19 10:07:35.221975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.954 [2024-11-19 10:07:35.253853] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.954 [2024-11-19 10:07:35.295041] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:15.954 00:07:15.954 Compression does not support the verify option, aborting. 00:07:15.954 10:07:35 -- common/autotest_common.sh@653 -- # es=161 00:07:15.954 10:07:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.954 10:07:35 -- common/autotest_common.sh@662 -- # es=33 00:07:15.954 10:07:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.954 10:07:35 -- common/autotest_common.sh@670 -- # es=1 00:07:15.954 10:07:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.954 ************************************ 00:07:15.954 END TEST accel_compress_verify 00:07:15.954 ************************************ 00:07:15.954 00:07:15.954 real 0m0.310s 00:07:15.954 user 0m0.210s 00:07:15.954 sys 0m0.065s 00:07:15.954 10:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.954 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 10:07:35 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:15.954 10:07:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.954 10:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.954 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.954 ************************************ 00:07:15.954 START TEST accel_wrong_workload 00:07:15.954 ************************************ 00:07:15.955 10:07:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:15.955 10:07:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.955 10:07:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:15.955 10:07:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.955 10:07:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:15.955 10:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:15.955 10:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.955 10:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.955 10:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.955 10:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.955 10:07:35 -- accel/accel.sh@42 -- # jq -r . 
00:07:15.955 Unsupported workload type: foobar 00:07:15.955 [2024-11-19 10:07:35.422105] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:15.955 accel_perf options: 00:07:15.955 [-h help message] 00:07:15.955 [-q queue depth per core] 00:07:15.955 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.955 [-T number of threads per core 00:07:15.955 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.955 [-t time in seconds] 00:07:15.955 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.955 [ dif_verify, , dif_generate, dif_generate_copy 00:07:15.955 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.955 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.955 [-S for crc32c workload, use this seed value (default 0) 00:07:15.955 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.955 [-f for fill workload, use this BYTE value (default 255) 00:07:15.955 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.955 [-y verify result if this switch is on] 00:07:15.955 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.955 Can be used to spread operations across a wider range of memory. 00:07:15.955 10:07:35 -- common/autotest_common.sh@653 -- # es=1 00:07:15.955 10:07:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.955 10:07:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.955 10:07:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.955 00:07:15.955 real 0m0.030s 00:07:15.955 user 0m0.020s 00:07:15.955 sys 0m0.010s 00:07:15.955 10:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.955 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.955 ************************************ 00:07:15.955 END TEST accel_wrong_workload 00:07:15.955 ************************************ 00:07:15.955 10:07:35 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.955 10:07:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:15.955 10:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.955 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:15.955 ************************************ 00:07:15.955 START TEST accel_negative_buffers 00:07:15.955 ************************************ 00:07:15.955 10:07:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.955 10:07:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.955 10:07:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:15.955 10:07:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.955 10:07:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.955 10:07:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:15.955 10:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:15.955 10:07:35 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:15.955 10:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.955 10:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.955 10:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.955 10:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.955 10:07:35 -- accel/accel.sh@42 -- # jq -r . 00:07:15.955 -x option must be non-negative. 00:07:15.955 [2024-11-19 10:07:35.499533] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:16.214 accel_perf options: 00:07:16.214 [-h help message] 00:07:16.214 [-q queue depth per core] 00:07:16.214 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:16.214 [-T number of threads per core 00:07:16.214 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:16.214 [-t time in seconds] 00:07:16.214 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:16.214 [ dif_verify, , dif_generate, dif_generate_copy 00:07:16.214 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:16.214 [-l for compress/decompress workloads, name of uncompressed input file 00:07:16.214 [-S for crc32c workload, use this seed value (default 0) 00:07:16.214 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:16.214 [-f for fill workload, use this BYTE value (default 255) 00:07:16.214 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:16.214 [-y verify result if this switch is on] 00:07:16.214 [-a tasks to allocate per core (default: same value as -q)] 00:07:16.214 Can be used to spread operations across a wider range of memory. 
00:07:16.214 10:07:35 -- common/autotest_common.sh@653 -- # es=1 00:07:16.214 10:07:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:16.214 10:07:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:16.214 10:07:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:16.214 00:07:16.214 real 0m0.029s 00:07:16.214 user 0m0.017s 00:07:16.214 sys 0m0.012s 00:07:16.214 10:07:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.214 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 END TEST accel_negative_buffers 00:07:16.214 ************************************ 00:07:16.214 10:07:35 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:16.214 10:07:35 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:16.214 10:07:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.214 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:16.214 ************************************ 00:07:16.214 START TEST accel_crc32c 00:07:16.214 ************************************ 00:07:16.214 10:07:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:16.214 10:07:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.214 10:07:35 -- accel/accel.sh@17 -- # local accel_module 00:07:16.214 10:07:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:16.214 10:07:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:16.215 10:07:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.215 10:07:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.215 10:07:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.215 10:07:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.215 10:07:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.215 10:07:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.215 10:07:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.215 10:07:35 -- accel/accel.sh@42 -- # jq -r . 00:07:16.215 [2024-11-19 10:07:35.573178] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.215 [2024-11-19 10:07:35.573269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70225 ] 00:07:16.215 [2024-11-19 10:07:35.704977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.215 [2024-11-19 10:07:35.738082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.591 10:07:36 -- accel/accel.sh@18 -- # out=' 00:07:17.591 SPDK Configuration: 00:07:17.591 Core mask: 0x1 00:07:17.591 00:07:17.591 Accel Perf Configuration: 00:07:17.591 Workload Type: crc32c 00:07:17.591 CRC-32C seed: 32 00:07:17.591 Transfer size: 4096 bytes 00:07:17.591 Vector count 1 00:07:17.591 Module: software 00:07:17.591 Queue depth: 32 00:07:17.591 Allocate depth: 32 00:07:17.591 # threads/core: 1 00:07:17.591 Run time: 1 seconds 00:07:17.591 Verify: Yes 00:07:17.591 00:07:17.591 Running for 1 seconds... 
00:07:17.591 00:07:17.591 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.591 ------------------------------------------------------------------------------------ 00:07:17.591 0,0 428448/s 1673 MiB/s 0 0 00:07:17.591 ==================================================================================== 00:07:17.591 Total 428448/s 1673 MiB/s 0 0' 00:07:17.591 10:07:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:17.591 10:07:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:17.591 10:07:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.591 10:07:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.591 10:07:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.591 10:07:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.591 10:07:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.591 10:07:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.591 10:07:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.591 10:07:36 -- accel/accel.sh@42 -- # jq -r . 00:07:17.591 [2024-11-19 10:07:36.883179] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.591 [2024-11-19 10:07:36.883769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70244 ] 00:07:17.591 [2024-11-19 10:07:37.021253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.591 [2024-11-19 10:07:37.055719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val=0x1 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val=crc32c 00:07:17.591 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.591 10:07:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.591 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.591 10:07:37 -- accel/accel.sh@21 -- # val=32 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val=software 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val=32 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val=32 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val=1 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val=Yes 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.592 10:07:37 -- accel/accel.sh@21 -- # val= 00:07:17.592 10:07:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.592 10:07:37 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- 
accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@21 -- # val= 00:07:18.966 10:07:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # IFS=: 00:07:18.966 10:07:38 -- accel/accel.sh@20 -- # read -r var val 00:07:18.966 10:07:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.966 10:07:38 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:18.966 10:07:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.966 00:07:18.966 real 0m2.632s 00:07:18.966 user 0m2.291s 00:07:18.966 sys 0m0.142s 00:07:18.966 10:07:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.966 10:07:38 -- common/autotest_common.sh@10 -- # set +x 00:07:18.966 ************************************ 00:07:18.966 END TEST accel_crc32c 00:07:18.966 ************************************ 00:07:18.966 10:07:38 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:18.966 10:07:38 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:18.966 10:07:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.966 10:07:38 -- common/autotest_common.sh@10 -- # set +x 00:07:18.966 ************************************ 00:07:18.966 START TEST accel_crc32c_C2 00:07:18.966 ************************************ 00:07:18.966 10:07:38 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:18.966 10:07:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.966 10:07:38 -- accel/accel.sh@17 -- # local accel_module 00:07:18.966 10:07:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:18.966 10:07:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:18.966 10:07:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.966 10:07:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.966 10:07:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.966 10:07:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.966 10:07:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.966 10:07:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.966 10:07:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.966 10:07:38 -- accel/accel.sh@42 -- # jq -r . 00:07:18.966 [2024-11-19 10:07:38.255957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.966 [2024-11-19 10:07:38.256045] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70278 ] 00:07:18.966 [2024-11-19 10:07:38.388272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.966 [2024-11-19 10:07:38.431292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.340 10:07:39 -- accel/accel.sh@18 -- # out=' 00:07:20.340 SPDK Configuration: 00:07:20.340 Core mask: 0x1 00:07:20.340 00:07:20.340 Accel Perf Configuration: 00:07:20.340 Workload Type: crc32c 00:07:20.340 CRC-32C seed: 0 00:07:20.340 Transfer size: 4096 bytes 00:07:20.340 Vector count 2 00:07:20.340 Module: software 00:07:20.340 Queue depth: 32 00:07:20.340 Allocate depth: 32 00:07:20.340 # threads/core: 1 00:07:20.340 Run time: 1 seconds 00:07:20.340 Verify: Yes 00:07:20.340 00:07:20.340 Running for 1 seconds... 
00:07:20.340 00:07:20.340 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.340 ------------------------------------------------------------------------------------ 00:07:20.340 0,0 330560/s 2582 MiB/s 0 0 00:07:20.340 ==================================================================================== 00:07:20.340 Total 330560/s 1291 MiB/s 0 0' 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:20.340 10:07:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.340 10:07:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:20.340 10:07:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.340 10:07:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.340 10:07:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.340 10:07:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.340 10:07:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.340 10:07:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.340 10:07:39 -- accel/accel.sh@42 -- # jq -r . 00:07:20.340 [2024-11-19 10:07:39.600027] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.340 [2024-11-19 10:07:39.600132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70293 ] 00:07:20.340 [2024-11-19 10:07:39.738287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.340 [2024-11-19 10:07:39.789196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=0x1 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=crc32c 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=0 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=software 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=32 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=32 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=1 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val=Yes 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.340 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.340 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.340 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:20.341 10:07:39 -- accel/accel.sh@21 -- # val= 00:07:20.341 10:07:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.341 10:07:39 -- accel/accel.sh@20 -- # IFS=: 00:07:20.341 10:07:39 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- 
accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@21 -- # val= 00:07:21.715 10:07:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # IFS=: 00:07:21.715 10:07:40 -- accel/accel.sh@20 -- # read -r var val 00:07:21.715 10:07:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.715 10:07:40 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:21.715 10:07:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.715 00:07:21.715 real 0m2.686s 00:07:21.715 user 0m2.317s 00:07:21.715 sys 0m0.156s 00:07:21.715 10:07:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.715 10:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.715 ************************************ 00:07:21.715 END TEST accel_crc32c_C2 00:07:21.715 ************************************ 00:07:21.715 10:07:40 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:21.715 10:07:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:21.715 10:07:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.715 10:07:40 -- common/autotest_common.sh@10 -- # set +x 00:07:21.715 ************************************ 00:07:21.715 START TEST accel_copy 00:07:21.715 ************************************ 00:07:21.715 10:07:40 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:21.715 10:07:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.715 10:07:40 -- accel/accel.sh@17 -- # local accel_module 00:07:21.715 10:07:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:21.715 10:07:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:21.715 10:07:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.715 10:07:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.715 10:07:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.715 10:07:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.715 10:07:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.715 10:07:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.715 10:07:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.715 10:07:40 -- accel/accel.sh@42 -- # jq -r . 00:07:21.715 [2024-11-19 10:07:40.987219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.715 [2024-11-19 10:07:40.987317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70327 ] 00:07:21.715 [2024-11-19 10:07:41.122483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.715 [2024-11-19 10:07:41.156114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.087 10:07:42 -- accel/accel.sh@18 -- # out=' 00:07:23.087 SPDK Configuration: 00:07:23.087 Core mask: 0x1 00:07:23.087 00:07:23.087 Accel Perf Configuration: 00:07:23.087 Workload Type: copy 00:07:23.087 Transfer size: 4096 bytes 00:07:23.087 Vector count 1 00:07:23.087 Module: software 00:07:23.087 Queue depth: 32 00:07:23.087 Allocate depth: 32 00:07:23.087 # threads/core: 1 00:07:23.087 Run time: 1 seconds 00:07:23.087 Verify: Yes 00:07:23.087 00:07:23.087 Running for 1 seconds... 
00:07:23.087 00:07:23.087 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.087 ------------------------------------------------------------------------------------ 00:07:23.087 0,0 301184/s 1176 MiB/s 0 0 00:07:23.087 ==================================================================================== 00:07:23.087 Total 301184/s 1176 MiB/s 0 0' 00:07:23.087 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.087 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.087 10:07:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:23.087 10:07:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:23.087 10:07:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.087 10:07:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.087 10:07:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.087 10:07:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.087 10:07:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.087 10:07:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.087 10:07:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.087 10:07:42 -- accel/accel.sh@42 -- # jq -r . 00:07:23.087 [2024-11-19 10:07:42.309577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.087 [2024-11-19 10:07:42.309667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70341 ] 00:07:23.087 [2024-11-19 10:07:42.447932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.087 [2024-11-19 10:07:42.481161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.087 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=0x1 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=copy 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- 
accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=software 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=32 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=32 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=1 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val=Yes 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:23.088 10:07:42 -- accel/accel.sh@21 -- # val= 00:07:23.088 10:07:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # IFS=: 00:07:23.088 10:07:42 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@21 -- # val= 00:07:24.460 10:07:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.460 10:07:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.460 10:07:43 -- 
accel/accel.sh@20 -- # read -r var val 00:07:24.460 10:07:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.460 10:07:43 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:24.460 10:07:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.460 00:07:24.460 real 0m2.641s 00:07:24.460 user 0m2.300s 00:07:24.460 sys 0m0.136s 00:07:24.460 10:07:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.460 10:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.460 ************************************ 00:07:24.460 END TEST accel_copy 00:07:24.460 ************************************ 00:07:24.460 10:07:43 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.460 10:07:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:24.460 10:07:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.460 10:07:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.460 ************************************ 00:07:24.460 START TEST accel_fill 00:07:24.460 ************************************ 00:07:24.460 10:07:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.460 10:07:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.460 10:07:43 -- accel/accel.sh@17 -- # local accel_module 00:07:24.460 10:07:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.460 10:07:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:24.460 10:07:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.460 10:07:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.460 10:07:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.460 10:07:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.460 10:07:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.460 10:07:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.460 10:07:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.460 10:07:43 -- accel/accel.sh@42 -- # jq -r . 00:07:24.460 [2024-11-19 10:07:43.670917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:24.460 [2024-11-19 10:07:43.671010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70376 ] 00:07:24.460 [2024-11-19 10:07:43.808232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.460 [2024-11-19 10:07:43.841218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.839 10:07:44 -- accel/accel.sh@18 -- # out=' 00:07:25.839 SPDK Configuration: 00:07:25.839 Core mask: 0x1 00:07:25.839 00:07:25.839 Accel Perf Configuration: 00:07:25.839 Workload Type: fill 00:07:25.839 Fill pattern: 0x80 00:07:25.839 Transfer size: 4096 bytes 00:07:25.839 Vector count 1 00:07:25.839 Module: software 00:07:25.839 Queue depth: 64 00:07:25.839 Allocate depth: 64 00:07:25.839 # threads/core: 1 00:07:25.839 Run time: 1 seconds 00:07:25.839 Verify: Yes 00:07:25.839 00:07:25.839 Running for 1 seconds... 
00:07:25.839 00:07:25.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.839 ------------------------------------------------------------------------------------ 00:07:25.839 0,0 446208/s 1743 MiB/s 0 0 00:07:25.839 ==================================================================================== 00:07:25.839 Total 446208/s 1743 MiB/s 0 0' 00:07:25.839 10:07:44 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.839 10:07:44 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.839 10:07:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.839 10:07:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.839 10:07:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.839 10:07:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.839 10:07:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.839 10:07:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.839 10:07:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.839 10:07:44 -- accel/accel.sh@42 -- # jq -r . 00:07:25.839 [2024-11-19 10:07:44.986561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.839 [2024-11-19 10:07:44.986652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70390 ] 00:07:25.839 [2024-11-19 10:07:45.124362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.839 [2024-11-19 10:07:45.158577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=0x1 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=fill 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=0x80 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 
00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=software 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=64 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=64 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=1 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val=Yes 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.839 10:07:45 -- accel/accel.sh@21 -- # val= 00:07:25.839 10:07:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.839 10:07:45 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 
00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@21 -- # val= 00:07:26.773 10:07:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.773 10:07:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.773 10:07:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.773 10:07:46 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:26.773 10:07:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.773 00:07:26.773 real 0m2.640s 00:07:26.773 user 0m2.287s 00:07:26.773 sys 0m0.152s 00:07:26.773 10:07:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.773 10:07:46 -- common/autotest_common.sh@10 -- # set +x 00:07:26.773 ************************************ 00:07:26.773 END TEST accel_fill 00:07:26.773 ************************************ 00:07:27.031 10:07:46 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:27.031 10:07:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:27.031 10:07:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.031 10:07:46 -- common/autotest_common.sh@10 -- # set +x 00:07:27.031 ************************************ 00:07:27.031 START TEST accel_copy_crc32c 00:07:27.031 ************************************ 00:07:27.031 10:07:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:27.031 10:07:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.031 10:07:46 -- accel/accel.sh@17 -- # local accel_module 00:07:27.031 10:07:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:27.031 10:07:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:27.031 10:07:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.031 10:07:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.031 10:07:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.031 10:07:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.031 10:07:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.031 10:07:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.031 10:07:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.031 10:07:46 -- accel/accel.sh@42 -- # jq -r . 00:07:27.031 [2024-11-19 10:07:46.357325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.031 [2024-11-19 10:07:46.357416] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70424 ] 00:07:27.031 [2024-11-19 10:07:46.491529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.031 [2024-11-19 10:07:46.524872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.403 10:07:47 -- accel/accel.sh@18 -- # out=' 00:07:28.403 SPDK Configuration: 00:07:28.403 Core mask: 0x1 00:07:28.403 00:07:28.403 Accel Perf Configuration: 00:07:28.403 Workload Type: copy_crc32c 00:07:28.403 CRC-32C seed: 0 00:07:28.403 Vector size: 4096 bytes 00:07:28.403 Transfer size: 4096 bytes 00:07:28.403 Vector count 1 00:07:28.403 Module: software 00:07:28.403 Queue depth: 32 00:07:28.403 Allocate depth: 32 00:07:28.403 # threads/core: 1 00:07:28.403 Run time: 1 seconds 00:07:28.403 Verify: Yes 00:07:28.403 00:07:28.403 Running for 1 seconds... 
00:07:28.403 00:07:28.403 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.403 ------------------------------------------------------------------------------------ 00:07:28.403 0,0 238880/s 933 MiB/s 0 0 00:07:28.403 ==================================================================================== 00:07:28.403 Total 238880/s 933 MiB/s 0 0' 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.403 10:07:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.403 10:07:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.403 10:07:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.403 10:07:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.403 10:07:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.403 10:07:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.403 10:07:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.403 10:07:47 -- accel/accel.sh@42 -- # jq -r . 00:07:28.403 [2024-11-19 10:07:47.671877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.403 [2024-11-19 10:07:47.671967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70444 ] 00:07:28.403 [2024-11-19 10:07:47.806247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.403 [2024-11-19 10:07:47.839689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=0x1 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=0 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 
10:07:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=software 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=32 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=32 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=1 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val=Yes 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.403 10:07:47 -- accel/accel.sh@21 -- # val= 00:07:28.403 10:07:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.403 10:07:47 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 
00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 ************************************ 00:07:29.774 END TEST accel_copy_crc32c 00:07:29.774 ************************************ 00:07:29.774 10:07:48 -- accel/accel.sh@21 -- # val= 00:07:29.774 10:07:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # IFS=: 00:07:29.774 10:07:48 -- accel/accel.sh@20 -- # read -r var val 00:07:29.774 10:07:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.774 10:07:48 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:29.774 10:07:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.774 00:07:29.774 real 0m2.632s 00:07:29.774 user 0m2.293s 00:07:29.774 sys 0m0.135s 00:07:29.774 10:07:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.774 10:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:29.774 10:07:49 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.774 10:07:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:29.774 10:07:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.774 10:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:29.774 ************************************ 00:07:29.774 START TEST accel_copy_crc32c_C2 00:07:29.774 ************************************ 00:07:29.774 10:07:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.774 10:07:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.774 10:07:49 -- accel/accel.sh@17 -- # local accel_module 00:07:29.774 10:07:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:29.774 10:07:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:29.774 10:07:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.774 10:07:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.774 10:07:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.774 10:07:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.774 10:07:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.774 10:07:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.774 10:07:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.774 10:07:49 -- accel/accel.sh@42 -- # jq -r . 00:07:29.774 [2024-11-19 10:07:49.037350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:29.774 [2024-11-19 10:07:49.037437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70473 ] 00:07:29.774 [2024-11-19 10:07:49.174694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.774 [2024-11-19 10:07:49.216271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.147 10:07:50 -- accel/accel.sh@18 -- # out=' 00:07:31.147 SPDK Configuration: 00:07:31.147 Core mask: 0x1 00:07:31.147 00:07:31.147 Accel Perf Configuration: 00:07:31.147 Workload Type: copy_crc32c 00:07:31.147 CRC-32C seed: 0 00:07:31.147 Vector size: 4096 bytes 00:07:31.147 Transfer size: 8192 bytes 00:07:31.147 Vector count 2 00:07:31.147 Module: software 00:07:31.147 Queue depth: 32 00:07:31.147 Allocate depth: 32 00:07:31.147 # threads/core: 1 00:07:31.147 Run time: 1 seconds 00:07:31.147 Verify: Yes 00:07:31.147 00:07:31.147 Running for 1 seconds... 00:07:31.147 00:07:31.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.147 ------------------------------------------------------------------------------------ 00:07:31.147 0,0 155008/s 1211 MiB/s 0 0 00:07:31.147 ==================================================================================== 00:07:31.147 Total 155008/s 605 MiB/s 0 0' 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.147 10:07:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:31.147 10:07:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.147 10:07:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.147 10:07:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.147 10:07:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.147 10:07:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.147 10:07:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.147 10:07:50 -- accel/accel.sh@42 -- # jq -r . 00:07:31.147 [2024-11-19 10:07:50.369949] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.147 [2024-11-19 10:07:50.370046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70492 ] 00:07:31.147 [2024-11-19 10:07:50.505788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.147 [2024-11-19 10:07:50.545610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=0x1 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=0 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=software 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=32 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=32 
00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=1 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val=Yes 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:31.147 10:07:50 -- accel/accel.sh@21 -- # val= 00:07:31.147 10:07:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # IFS=: 00:07:31.147 10:07:50 -- accel/accel.sh@20 -- # read -r var val 00:07:32.522 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.522 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.522 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.522 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.522 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.522 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.522 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.523 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.523 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.523 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.523 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.523 10:07:51 -- accel/accel.sh@21 -- # val= 00:07:32.523 10:07:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # IFS=: 00:07:32.523 10:07:51 -- accel/accel.sh@20 -- # read -r var val 00:07:32.523 10:07:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.523 10:07:51 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:32.523 10:07:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.523 ************************************ 00:07:32.523 END TEST accel_copy_crc32c_C2 00:07:32.523 ************************************ 00:07:32.523 00:07:32.523 real 0m2.674s 00:07:32.523 user 0m2.305s 00:07:32.523 sys 0m0.160s 00:07:32.523 10:07:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.523 10:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:32.523 10:07:51 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:32.523 10:07:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:32.523 10:07:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.523 10:07:51 -- common/autotest_common.sh@10 -- # set +x 00:07:32.523 ************************************ 00:07:32.523 START TEST accel_dualcast 00:07:32.523 ************************************ 00:07:32.523 10:07:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:32.523 10:07:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.523 10:07:51 -- accel/accel.sh@17 -- # local accel_module 00:07:32.523 10:07:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:32.523 10:07:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:32.523 10:07:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.523 10:07:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.523 10:07:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.523 10:07:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.523 10:07:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.523 10:07:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.523 10:07:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.523 10:07:51 -- accel/accel.sh@42 -- # jq -r . 00:07:32.523 [2024-11-19 10:07:51.768529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.523 [2024-11-19 10:07:51.768629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70527 ] 00:07:32.523 [2024-11-19 10:07:51.904260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.523 [2024-11-19 10:07:51.939258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.897 10:07:53 -- accel/accel.sh@18 -- # out=' 00:07:33.897 SPDK Configuration: 00:07:33.897 Core mask: 0x1 00:07:33.897 00:07:33.897 Accel Perf Configuration: 00:07:33.897 Workload Type: dualcast 00:07:33.897 Transfer size: 4096 bytes 00:07:33.897 Vector count 1 00:07:33.897 Module: software 00:07:33.897 Queue depth: 32 00:07:33.897 Allocate depth: 32 00:07:33.897 # threads/core: 1 00:07:33.897 Run time: 1 seconds 00:07:33.897 Verify: Yes 00:07:33.897 00:07:33.897 Running for 1 seconds... 00:07:33.897 00:07:33.897 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.897 ------------------------------------------------------------------------------------ 00:07:33.897 0,0 332864/s 1300 MiB/s 0 0 00:07:33.897 ==================================================================================== 00:07:33.897 Total 332864/s 1300 MiB/s 0 0' 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:33.897 10:07:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:33.897 10:07:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.897 10:07:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.897 10:07:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.897 10:07:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.897 10:07:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.897 10:07:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.897 10:07:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.897 10:07:53 -- accel/accel.sh@42 -- # jq -r . 
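Each workload pass ultimately runs the accel_perf example binary with the flags traced above. A hedged way to reproduce the dualcast pass by hand, assuming the repo path from this log and a built examples directory; the -c /dev/fd/62 option the harness uses to feed a JSON accel config over a file descriptor is omitted in this sketch:

    # Path and flags taken from the traced command above; the run reports
    # Module: software, i.e. the software accel implementation.
    # -t 1 : run for 1 second   -w dualcast : workload   -y : verify results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y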
00:07:33.897 [2024-11-19 10:07:53.085277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.897 [2024-11-19 10:07:53.085368] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70541 ] 00:07:33.897 [2024-11-19 10:07:53.214959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.897 [2024-11-19 10:07:53.249810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=0x1 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=dualcast 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=software 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=32 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=32 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=1 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 
10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val=Yes 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.897 10:07:53 -- accel/accel.sh@21 -- # val= 00:07:33.897 10:07:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # IFS=: 00:07:33.897 10:07:53 -- accel/accel.sh@20 -- # read -r var val 00:07:35.270 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.270 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.270 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.271 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.271 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.271 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 ************************************ 00:07:35.271 END TEST accel_dualcast 00:07:35.271 ************************************ 00:07:35.271 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.271 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 10:07:54 -- accel/accel.sh@21 -- # val= 00:07:35.271 10:07:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.271 10:07:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.271 10:07:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.271 10:07:54 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:35.271 10:07:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.271 00:07:35.271 real 0m2.639s 00:07:35.271 user 0m2.289s 00:07:35.271 sys 0m0.145s 00:07:35.271 10:07:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.271 10:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:35.271 10:07:54 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:35.271 10:07:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:35.271 10:07:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.271 10:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:35.271 ************************************ 00:07:35.271 START TEST accel_compare 00:07:35.271 ************************************ 00:07:35.271 10:07:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:35.271 
10:07:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.271 10:07:54 -- accel/accel.sh@17 -- # local accel_module 00:07:35.271 10:07:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:35.271 10:07:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:35.271 10:07:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.271 10:07:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.271 10:07:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.271 10:07:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.271 10:07:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.271 10:07:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.271 10:07:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.271 10:07:54 -- accel/accel.sh@42 -- # jq -r . 00:07:35.271 [2024-11-19 10:07:54.452497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:35.271 [2024-11-19 10:07:54.452715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70575 ] 00:07:35.271 [2024-11-19 10:07:54.587315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.271 [2024-11-19 10:07:54.627259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.647 10:07:55 -- accel/accel.sh@18 -- # out=' 00:07:36.647 SPDK Configuration: 00:07:36.647 Core mask: 0x1 00:07:36.647 00:07:36.647 Accel Perf Configuration: 00:07:36.647 Workload Type: compare 00:07:36.647 Transfer size: 4096 bytes 00:07:36.647 Vector count 1 00:07:36.647 Module: software 00:07:36.647 Queue depth: 32 00:07:36.647 Allocate depth: 32 00:07:36.647 # threads/core: 1 00:07:36.647 Run time: 1 seconds 00:07:36.647 Verify: Yes 00:07:36.647 00:07:36.647 Running for 1 seconds... 00:07:36.647 00:07:36.647 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.647 ------------------------------------------------------------------------------------ 00:07:36.647 0,0 408576/s 1596 MiB/s 0 0 00:07:36.647 ==================================================================================== 00:07:36.647 Total 408576/s 1596 MiB/s 0 0' 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:36.647 10:07:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.647 10:07:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.647 10:07:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.647 10:07:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.647 10:07:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.647 10:07:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.647 10:07:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.647 10:07:55 -- accel/accel.sh@42 -- # jq -r . 00:07:36.647 [2024-11-19 10:07:55.796775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
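The MiB/s column in these result tables follows from the transfer rate and the 4096-byte transfer size, so the compare figures above can be sanity-checked with shell arithmetic (the calculation is mine, not part of the log):

    # 408576 transfers/s at 4096 bytes per transfer, expressed in MiB/s.
    echo $(( 408576 * 4096 / 1024 / 1024 ))   # prints 1596, matching the table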
00:07:36.647 [2024-11-19 10:07:55.797116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70595 ] 00:07:36.647 [2024-11-19 10:07:55.931796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.647 [2024-11-19 10:07:55.966046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val= 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val= 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val=0x1 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val= 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val= 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.647 10:07:55 -- accel/accel.sh@21 -- # val=compare 00:07:36.647 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.647 10:07:55 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.647 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.648 10:07:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:55 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:55 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val= 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val=software 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val=32 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val=32 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val=1 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val=Yes 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val= 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.648 10:07:56 -- accel/accel.sh@21 -- # val= 00:07:36.648 10:07:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # IFS=: 00:07:36.648 10:07:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@21 -- # val= 00:07:37.582 10:07:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # IFS=: 00:07:37.582 10:07:57 -- accel/accel.sh@20 -- # read -r var val 00:07:37.582 10:07:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.582 ************************************ 00:07:37.582 END TEST accel_compare 00:07:37.582 ************************************ 00:07:37.582 10:07:57 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:37.582 10:07:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.582 00:07:37.582 real 0m2.669s 00:07:37.582 user 0m2.308s 00:07:37.582 sys 0m0.156s 00:07:37.582 10:07:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.582 10:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:37.841 10:07:57 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:37.841 10:07:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:37.841 10:07:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.841 10:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:37.841 ************************************ 00:07:37.841 START TEST accel_xor 00:07:37.841 ************************************ 00:07:37.841 10:07:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:37.841 10:07:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.841 10:07:57 -- accel/accel.sh@17 -- # local accel_module 00:07:37.841 
10:07:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:37.841 10:07:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:37.841 10:07:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.841 10:07:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.841 10:07:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.841 10:07:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.841 10:07:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.841 10:07:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.841 10:07:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.841 10:07:57 -- accel/accel.sh@42 -- # jq -r . 00:07:37.841 [2024-11-19 10:07:57.170796] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.841 [2024-11-19 10:07:57.171052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70624 ] 00:07:37.841 [2024-11-19 10:07:57.303087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.841 [2024-11-19 10:07:57.338046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.218 10:07:58 -- accel/accel.sh@18 -- # out=' 00:07:39.218 SPDK Configuration: 00:07:39.218 Core mask: 0x1 00:07:39.218 00:07:39.218 Accel Perf Configuration: 00:07:39.218 Workload Type: xor 00:07:39.218 Source buffers: 2 00:07:39.218 Transfer size: 4096 bytes 00:07:39.218 Vector count 1 00:07:39.218 Module: software 00:07:39.218 Queue depth: 32 00:07:39.218 Allocate depth: 32 00:07:39.218 # threads/core: 1 00:07:39.218 Run time: 1 seconds 00:07:39.218 Verify: Yes 00:07:39.218 00:07:39.218 Running for 1 seconds... 00:07:39.218 00:07:39.218 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.218 ------------------------------------------------------------------------------------ 00:07:39.218 0,0 242816/s 948 MiB/s 0 0 00:07:39.218 ==================================================================================== 00:07:39.218 Total 242816/s 948 MiB/s 0 0' 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:39.218 10:07:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.218 10:07:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.218 10:07:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.218 10:07:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.218 10:07:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.218 10:07:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.218 10:07:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.218 10:07:58 -- accel/accel.sh@42 -- # jq -r . 00:07:39.218 [2024-11-19 10:07:58.486190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
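With plain -w xor -y the configuration above reports two source buffers, so each destination byte is the XOR of the corresponding bytes of two 4096-byte sources. A toy, single-byte illustration of that semantics (the real work happens inside accel_perf over full 4 KiB buffers):

    # Toy illustration of the xor workload: dst = src1 ^ src2, byte-wise.
    a=0xA5; b=0x5A
    printf 'dst byte = 0x%02X\n' $(( a ^ b ))   # 0xFF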
00:07:39.218 [2024-11-19 10:07:58.486275] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70638 ] 00:07:39.218 [2024-11-19 10:07:58.621076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.218 [2024-11-19 10:07:58.655687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=0x1 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=xor 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=2 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=software 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=32 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=32 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=1 00:07:39.218 10:07:58 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val=Yes 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:39.218 10:07:58 -- accel/accel.sh@21 -- # val= 00:07:39.218 10:07:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # IFS=: 00:07:39.218 10:07:58 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 ************************************ 00:07:40.598 END TEST accel_xor 00:07:40.598 ************************************ 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@21 -- # val= 00:07:40.598 10:07:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # IFS=: 00:07:40.598 10:07:59 -- accel/accel.sh@20 -- # read -r var val 00:07:40.598 10:07:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.598 10:07:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:40.598 10:07:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.598 00:07:40.598 real 0m2.644s 00:07:40.598 user 0m2.300s 00:07:40.598 sys 0m0.142s 00:07:40.598 10:07:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.598 10:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.598 10:07:59 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:40.598 10:07:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:40.598 10:07:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.598 10:07:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.598 ************************************ 00:07:40.598 START TEST accel_xor 00:07:40.598 ************************************ 00:07:40.598 
10:07:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:40.598 10:07:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.598 10:07:59 -- accel/accel.sh@17 -- # local accel_module 00:07:40.598 10:07:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:40.599 10:07:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:40.599 10:07:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.599 10:07:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.599 10:07:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.599 10:07:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.599 10:07:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.599 10:07:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.599 10:07:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.599 10:07:59 -- accel/accel.sh@42 -- # jq -r . 00:07:40.599 [2024-11-19 10:07:59.858569] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.599 [2024-11-19 10:07:59.858851] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70678 ] 00:07:40.599 [2024-11-19 10:07:59.996555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.599 [2024-11-19 10:08:00.031197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.995 10:08:01 -- accel/accel.sh@18 -- # out=' 00:07:41.995 SPDK Configuration: 00:07:41.995 Core mask: 0x1 00:07:41.995 00:07:41.995 Accel Perf Configuration: 00:07:41.995 Workload Type: xor 00:07:41.995 Source buffers: 3 00:07:41.995 Transfer size: 4096 bytes 00:07:41.995 Vector count 1 00:07:41.995 Module: software 00:07:41.995 Queue depth: 32 00:07:41.995 Allocate depth: 32 00:07:41.995 # threads/core: 1 00:07:41.995 Run time: 1 seconds 00:07:41.995 Verify: Yes 00:07:41.995 00:07:41.995 Running for 1 seconds... 00:07:41.995 00:07:41.995 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.995 ------------------------------------------------------------------------------------ 00:07:41.995 0,0 230240/s 899 MiB/s 0 0 00:07:41.995 ==================================================================================== 00:07:41.995 Total 230240/s 899 MiB/s 0 0' 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:41.995 10:08:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.995 10:08:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:41.995 10:08:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.995 10:08:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.995 10:08:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.995 10:08:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.995 10:08:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.995 10:08:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.995 10:08:01 -- accel/accel.sh@42 -- # jq -r . 00:07:41.995 [2024-11-19 10:08:01.183426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
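The only change from the previous xor run is the -x 3 flag in the traced accel_test call, which appears in the configuration as three source buffers and costs a little throughput. Side by side, with the binary path shortened and the rates taken from the tables in this log:

    # -x selects the number of xor source buffers; omitted, the run above
    # used the two-source default.
    accel_perf -t 1 -w xor -y          # Source buffers: 2 -> 242816/s (~948 MiB/s)
    accel_perf -t 1 -w xor -y -x 3     # Source buffers: 3 -> 230240/s (~899 MiB/s)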
00:07:41.995 [2024-11-19 10:08:01.183693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70692 ] 00:07:41.995 [2024-11-19 10:08:01.318029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.995 [2024-11-19 10:08:01.351414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=0x1 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=xor 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=3 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=software 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=32 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=32 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=1 00:07:41.995 10:08:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val=Yes 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:41.995 10:08:01 -- accel/accel.sh@21 -- # val= 00:07:41.995 10:08:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # IFS=: 00:07:41.995 10:08:01 -- accel/accel.sh@20 -- # read -r var val 00:07:42.931 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:42.931 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:42.931 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:42.931 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:42.931 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:42.931 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:42.931 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:42.931 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:42.931 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:42.931 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.931 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:43.190 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:43.190 10:08:02 -- accel/accel.sh@21 -- # val= 00:07:43.190 10:08:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.190 10:08:02 -- accel/accel.sh@20 -- # IFS=: 00:07:43.190 10:08:02 -- accel/accel.sh@20 -- # read -r var val 00:07:43.190 10:08:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.190 10:08:02 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:43.190 ************************************ 00:07:43.190 END TEST accel_xor 00:07:43.190 ************************************ 00:07:43.190 10:08:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.190 00:07:43.190 real 0m2.645s 00:07:43.190 user 0m2.295s 00:07:43.190 sys 0m0.144s 00:07:43.190 10:08:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.190 10:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 10:08:02 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:43.190 10:08:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:43.190 10:08:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.190 10:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 ************************************ 00:07:43.190 START TEST accel_dif_verify 00:07:43.190 ************************************ 
00:07:43.190 10:08:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:43.190 10:08:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.190 10:08:02 -- accel/accel.sh@17 -- # local accel_module 00:07:43.190 10:08:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:43.190 10:08:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:43.190 10:08:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.190 10:08:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.190 10:08:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.190 10:08:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.190 10:08:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.190 10:08:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.190 10:08:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.190 10:08:02 -- accel/accel.sh@42 -- # jq -r . 00:07:43.190 [2024-11-19 10:08:02.546409] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.190 [2024-11-19 10:08:02.546549] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70721 ] 00:07:43.190 [2024-11-19 10:08:02.682762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.190 [2024-11-19 10:08:02.725327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.569 10:08:03 -- accel/accel.sh@18 -- # out=' 00:07:44.569 SPDK Configuration: 00:07:44.569 Core mask: 0x1 00:07:44.569 00:07:44.569 Accel Perf Configuration: 00:07:44.569 Workload Type: dif_verify 00:07:44.569 Vector size: 4096 bytes 00:07:44.569 Transfer size: 4096 bytes 00:07:44.569 Block size: 512 bytes 00:07:44.569 Metadata size: 8 bytes 00:07:44.569 Vector count 1 00:07:44.569 Module: software 00:07:44.569 Queue depth: 32 00:07:44.569 Allocate depth: 32 00:07:44.569 # threads/core: 1 00:07:44.569 Run time: 1 seconds 00:07:44.569 Verify: No 00:07:44.569 00:07:44.569 Running for 1 seconds... 00:07:44.569 00:07:44.569 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.569 ------------------------------------------------------------------------------------ 00:07:44.569 0,0 90400/s 358 MiB/s 0 0 00:07:44.569 ==================================================================================== 00:07:44.569 Total 90400/s 353 MiB/s 0 0' 00:07:44.569 10:08:03 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:44.569 10:08:03 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:44.569 10:08:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.569 10:08:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.569 10:08:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.569 10:08:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.569 10:08:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.569 10:08:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.569 10:08:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.569 10:08:03 -- accel/accel.sh@42 -- # jq -r . 00:07:44.569 [2024-11-19 10:08:03.888273] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
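The dif_verify configuration above pairs 4096-byte transfers with a 512-byte block size and 8 bytes of metadata, which matches the usual DIF layout of one 8-byte protection tuple per block (the per-block reading is an assumption; the log only states the sizes). Under that assumption:

    # 4096-byte transfer over 512-byte blocks, assuming 8 bytes of DIF per block.
    echo $(( 4096 / 512 ))   # 8 protected blocks per transfer
    echo $(( 8 * 8 ))        # 64 bytes of DIF metadata checked per transfer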
00:07:44.569 [2024-11-19 10:08:03.888415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70745 ] 00:07:44.569 [2024-11-19 10:08:04.029575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.569 [2024-11-19 10:08:04.064327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val=0x1 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val=dif_verify 00:07:44.569 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.569 10:08:04 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.569 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.569 10:08:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val=software 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 
-- # val=32 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val=32 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val=1 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val=No 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 10:08:04 -- accel/accel.sh@21 -- # val= 00:07:44.570 10:08:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 10:08:04 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@21 -- # val= 00:07:45.948 ************************************ 00:07:45.948 END TEST accel_dif_verify 00:07:45.948 ************************************ 00:07:45.948 10:08:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # IFS=: 00:07:45.948 10:08:05 -- accel/accel.sh@20 -- # read -r var val 00:07:45.948 10:08:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.948 10:08:05 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:45.948 10:08:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.948 00:07:45.948 real 0m2.671s 00:07:45.948 user 0m2.305s 00:07:45.948 sys 0m0.160s 00:07:45.948 10:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.948 
10:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:45.948 10:08:05 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:45.948 10:08:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:45.948 10:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.948 10:08:05 -- common/autotest_common.sh@10 -- # set +x 00:07:45.948 ************************************ 00:07:45.948 START TEST accel_dif_generate 00:07:45.948 ************************************ 00:07:45.948 10:08:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:45.948 10:08:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.948 10:08:05 -- accel/accel.sh@17 -- # local accel_module 00:07:45.948 10:08:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:45.948 10:08:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:45.948 10:08:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.948 10:08:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.948 10:08:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.948 10:08:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.948 10:08:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.948 10:08:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.948 10:08:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.948 10:08:05 -- accel/accel.sh@42 -- # jq -r . 00:07:45.948 [2024-11-19 10:08:05.263965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.948 [2024-11-19 10:08:05.264062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70775 ] 00:07:45.948 [2024-11-19 10:08:05.405757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.948 [2024-11-19 10:08:05.441607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.325 10:08:06 -- accel/accel.sh@18 -- # out=' 00:07:47.325 SPDK Configuration: 00:07:47.325 Core mask: 0x1 00:07:47.325 00:07:47.325 Accel Perf Configuration: 00:07:47.325 Workload Type: dif_generate 00:07:47.325 Vector size: 4096 bytes 00:07:47.325 Transfer size: 4096 bytes 00:07:47.325 Block size: 512 bytes 00:07:47.325 Metadata size: 8 bytes 00:07:47.325 Vector count 1 00:07:47.325 Module: software 00:07:47.325 Queue depth: 32 00:07:47.325 Allocate depth: 32 00:07:47.325 # threads/core: 1 00:07:47.325 Run time: 1 seconds 00:07:47.325 Verify: No 00:07:47.325 00:07:47.325 Running for 1 seconds... 
00:07:47.325 00:07:47.325 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.325 ------------------------------------------------------------------------------------ 00:07:47.325 0,0 114752/s 455 MiB/s 0 0 00:07:47.325 ==================================================================================== 00:07:47.325 Total 114752/s 448 MiB/s 0 0' 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:47.325 10:08:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:47.325 10:08:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.325 10:08:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.325 10:08:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.325 10:08:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.325 10:08:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.325 10:08:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.325 10:08:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.325 10:08:06 -- accel/accel.sh@42 -- # jq -r . 00:07:47.325 [2024-11-19 10:08:06.595758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.325 [2024-11-19 10:08:06.595872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70789 ] 00:07:47.325 [2024-11-19 10:08:06.734018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.325 [2024-11-19 10:08:06.771075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=0x1 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=dif_generate 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 
00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=software 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=32 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=32 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.325 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.325 10:08:06 -- accel/accel.sh@21 -- # val=1 00:07:47.325 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.326 10:08:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.326 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.326 10:08:06 -- accel/accel.sh@21 -- # val=No 00:07:47.326 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.326 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.326 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:47.326 10:08:06 -- accel/accel.sh@21 -- # val= 00:07:47.326 10:08:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # IFS=: 00:07:47.326 10:08:06 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- 
accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@21 -- # val= 00:07:48.703 10:08:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # IFS=: 00:07:48.703 10:08:07 -- accel/accel.sh@20 -- # read -r var val 00:07:48.703 10:08:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.703 10:08:07 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:48.703 10:08:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.703 00:07:48.703 real 0m2.666s 00:07:48.703 user 0m2.299s 00:07:48.703 sys 0m0.162s 00:07:48.703 10:08:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.703 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.703 ************************************ 00:07:48.703 END TEST accel_dif_generate 00:07:48.703 ************************************ 00:07:48.703 10:08:07 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:48.703 10:08:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:48.703 10:08:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.703 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.703 ************************************ 00:07:48.703 START TEST accel_dif_generate_copy 00:07:48.703 ************************************ 00:07:48.703 10:08:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:48.703 10:08:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.703 10:08:07 -- accel/accel.sh@17 -- # local accel_module 00:07:48.703 10:08:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:48.703 10:08:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:48.703 10:08:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.703 10:08:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.703 10:08:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.703 10:08:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.703 10:08:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.703 10:08:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.703 10:08:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.703 10:08:07 -- accel/accel.sh@42 -- # jq -r . 00:07:48.703 [2024-11-19 10:08:07.984167] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.703 [2024-11-19 10:08:07.984252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70824 ] 00:07:48.703 [2024-11-19 10:08:08.120383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.703 [2024-11-19 10:08:08.167180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.081 10:08:09 -- accel/accel.sh@18 -- # out=' 00:07:50.081 SPDK Configuration: 00:07:50.081 Core mask: 0x1 00:07:50.081 00:07:50.081 Accel Perf Configuration: 00:07:50.081 Workload Type: dif_generate_copy 00:07:50.081 Vector size: 4096 bytes 00:07:50.081 Transfer size: 4096 bytes 00:07:50.081 Vector count 1 00:07:50.081 Module: software 00:07:50.081 Queue depth: 32 00:07:50.081 Allocate depth: 32 00:07:50.081 # threads/core: 1 00:07:50.081 Run time: 1 seconds 00:07:50.081 Verify: No 00:07:50.081 00:07:50.081 Running for 1 seconds... 00:07:50.081 00:07:50.081 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.081 ------------------------------------------------------------------------------------ 00:07:50.081 0,0 84224/s 334 MiB/s 0 0 00:07:50.081 ==================================================================================== 00:07:50.081 Total 84224/s 329 MiB/s 0 0' 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:50.081 10:08:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:50.081 10:08:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.081 10:08:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.081 10:08:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.081 10:08:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.081 10:08:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.081 10:08:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.081 10:08:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.081 10:08:09 -- accel/accel.sh@42 -- # jq -r . 00:07:50.081 [2024-11-19 10:08:09.331377] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:50.081 [2024-11-19 10:08:09.331795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70843 ] 00:07:50.081 [2024-11-19 10:08:09.469950] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.081 [2024-11-19 10:08:09.506811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=0x1 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=software 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=32 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=32 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 
-- # val=1 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val=No 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:50.081 10:08:09 -- accel/accel.sh@21 -- # val= 00:07:50.081 10:08:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # IFS=: 00:07:50.081 10:08:09 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@21 -- # val= 00:07:51.457 10:08:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # IFS=: 00:07:51.457 10:08:10 -- accel/accel.sh@20 -- # read -r var val 00:07:51.457 10:08:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.457 10:08:10 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:51.457 10:08:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.457 00:07:51.457 real 0m2.688s 00:07:51.457 user 0m2.324s 00:07:51.457 sys 0m0.159s 00:07:51.457 10:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.457 10:08:10 -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 END TEST accel_dif_generate_copy 00:07:51.457 ************************************ 00:07:51.457 10:08:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:51.457 10:08:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.457 10:08:10 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:51.457 10:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.457 10:08:10 -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 START TEST accel_comp 00:07:51.457 ************************************ 00:07:51.457 10:08:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.457 10:08:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.457 10:08:10 -- accel/accel.sh@17 -- # local accel_module 00:07:51.457 10:08:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.457 10:08:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.457 10:08:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.457 10:08:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.457 10:08:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.457 10:08:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.457 10:08:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.457 10:08:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.457 10:08:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.457 10:08:10 -- accel/accel.sh@42 -- # jq -r . 00:07:51.457 [2024-11-19 10:08:10.724733] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:51.457 [2024-11-19 10:08:10.724914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70872 ] 00:07:51.457 [2024-11-19 10:08:10.865291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.457 [2024-11-19 10:08:10.903760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.829 10:08:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:52.829 00:07:52.829 SPDK Configuration: 00:07:52.829 Core mask: 0x1 00:07:52.829 00:07:52.829 Accel Perf Configuration: 00:07:52.829 Workload Type: compress 00:07:52.829 Transfer size: 4096 bytes 00:07:52.829 Vector count 1 00:07:52.829 Module: software 00:07:52.829 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.829 Queue depth: 32 00:07:52.829 Allocate depth: 32 00:07:52.829 # threads/core: 1 00:07:52.829 Run time: 1 seconds 00:07:52.829 Verify: No 00:07:52.829 00:07:52.829 Running for 1 seconds... 
00:07:52.829 00:07:52.829 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.829 ------------------------------------------------------------------------------------ 00:07:52.829 0,0 41984/s 175 MiB/s 0 0 00:07:52.829 ==================================================================================== 00:07:52.829 Total 41984/s 164 MiB/s 0 0' 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.829 10:08:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.829 10:08:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.829 10:08:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.829 10:08:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.829 10:08:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.829 10:08:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.829 10:08:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.829 10:08:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.829 10:08:12 -- accel/accel.sh@42 -- # jq -r . 00:07:52.829 [2024-11-19 10:08:12.065087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:52.829 [2024-11-19 10:08:12.065201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ] 00:07:52.829 [2024-11-19 10:08:12.203487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.829 [2024-11-19 10:08:12.238458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=0x1 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=compress 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 
00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=software 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=32 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=32 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=1 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val=No 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.829 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.829 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.829 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.830 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.830 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:52.830 10:08:12 -- accel/accel.sh@21 -- # val= 00:07:52.830 10:08:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.830 10:08:12 -- accel/accel.sh@20 -- # IFS=: 00:07:52.830 10:08:12 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 
00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@21 -- # val= 00:07:54.203 10:08:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # IFS=: 00:07:54.203 10:08:13 -- accel/accel.sh@20 -- # read -r var val 00:07:54.203 10:08:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.203 10:08:13 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:54.203 10:08:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.203 00:07:54.203 real 0m2.675s 00:07:54.203 user 0m2.301s 00:07:54.203 sys 0m0.161s 00:07:54.203 10:08:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.203 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.203 ************************************ 00:07:54.203 END TEST accel_comp 00:07:54.203 ************************************ 00:07:54.203 10:08:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.203 10:08:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:54.203 10:08:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.203 10:08:13 -- common/autotest_common.sh@10 -- # set +x 00:07:54.203 ************************************ 00:07:54.203 START TEST accel_decomp 00:07:54.203 ************************************ 00:07:54.203 10:08:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.204 10:08:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.204 10:08:13 -- accel/accel.sh@17 -- # local accel_module 00:07:54.204 10:08:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.204 10:08:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.204 10:08:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.204 10:08:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.204 10:08:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.204 10:08:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.204 10:08:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.204 10:08:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.204 10:08:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.204 10:08:13 -- accel/accel.sh@42 -- # jq -r . 00:07:54.204 [2024-11-19 10:08:13.432057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.204 [2024-11-19 10:08:13.432760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70926 ] 00:07:54.204 [2024-11-19 10:08:13.569663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.204 [2024-11-19 10:08:13.604374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.579 10:08:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:55.579 00:07:55.579 SPDK Configuration: 00:07:55.579 Core mask: 0x1 00:07:55.579 00:07:55.579 Accel Perf Configuration: 00:07:55.579 Workload Type: decompress 00:07:55.579 Transfer size: 4096 bytes 00:07:55.579 Vector count 1 00:07:55.579 Module: software 00:07:55.580 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.580 Queue depth: 32 00:07:55.580 Allocate depth: 32 00:07:55.580 # threads/core: 1 00:07:55.580 Run time: 1 seconds 00:07:55.580 Verify: Yes 00:07:55.580 00:07:55.580 Running for 1 seconds... 00:07:55.580 00:07:55.580 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.580 ------------------------------------------------------------------------------------ 00:07:55.580 0,0 63136/s 116 MiB/s 0 0 00:07:55.580 ==================================================================================== 00:07:55.580 Total 63136/s 246 MiB/s 0 0' 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.580 10:08:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.580 10:08:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.580 10:08:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.580 10:08:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.580 10:08:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.580 10:08:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.580 10:08:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.580 10:08:14 -- accel/accel.sh@42 -- # jq -r . 00:07:55.580 [2024-11-19 10:08:14.760397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:55.580 [2024-11-19 10:08:14.760493] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70940 ] 00:07:55.580 [2024-11-19 10:08:14.897897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.580 [2024-11-19 10:08:14.933408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=0x1 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=decompress 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=software 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=32 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- 
accel/accel.sh@21 -- # val=32 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=1 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val=Yes 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:55.580 10:08:14 -- accel/accel.sh@21 -- # val= 00:07:55.580 10:08:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # IFS=: 00:07:55.580 10:08:14 -- accel/accel.sh@20 -- # read -r var val 00:07:56.514 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.514 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.514 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.514 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.514 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.514 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.514 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.514 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.514 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.775 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.775 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.775 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.775 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.775 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.775 10:08:16 -- accel/accel.sh@21 -- # val= 00:07:56.775 10:08:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # IFS=: 00:07:56.775 10:08:16 -- accel/accel.sh@20 -- # read -r var val 00:07:56.775 10:08:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.775 10:08:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.775 10:08:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.775 00:07:56.775 real 0m2.652s 00:07:56.775 user 0m2.301s 00:07:56.775 sys 0m0.146s 00:07:56.775 10:08:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.775 10:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:56.775 ************************************ 00:07:56.775 END TEST accel_decomp 00:07:56.775 ************************************ 00:07:56.775 10:08:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:56.775 10:08:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:56.775 10:08:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.775 10:08:16 -- common/autotest_common.sh@10 -- # set +x 00:07:56.775 ************************************ 00:07:56.775 START TEST accel_decmop_full 00:07:56.775 ************************************ 00:07:56.775 10:08:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.775 10:08:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.775 10:08:16 -- accel/accel.sh@17 -- # local accel_module 00:07:56.775 10:08:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.775 10:08:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.775 10:08:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.775 10:08:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.775 10:08:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.775 10:08:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.775 10:08:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.775 10:08:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.775 10:08:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.775 10:08:16 -- accel/accel.sh@42 -- # jq -r . 00:07:56.775 [2024-11-19 10:08:16.130128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.775 [2024-11-19 10:08:16.130292] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:07:56.775 [2024-11-19 10:08:16.266159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.775 [2024-11-19 10:08:16.308250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.195 10:08:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.195 00:07:58.195 SPDK Configuration: 00:07:58.195 Core mask: 0x1 00:07:58.195 00:07:58.195 Accel Perf Configuration: 00:07:58.195 Workload Type: decompress 00:07:58.195 Transfer size: 111250 bytes 00:07:58.195 Vector count 1 00:07:58.195 Module: software 00:07:58.195 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.195 Queue depth: 32 00:07:58.195 Allocate depth: 32 00:07:58.195 # threads/core: 1 00:07:58.195 Run time: 1 seconds 00:07:58.195 Verify: Yes 00:07:58.195 00:07:58.195 Running for 1 seconds... 
00:07:58.195 00:07:58.195 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.195 ------------------------------------------------------------------------------------ 00:07:58.195 0,0 4352/s 179 MiB/s 0 0 00:07:58.195 ==================================================================================== 00:07:58.195 Total 4352/s 461 MiB/s 0 0' 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.195 10:08:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.195 10:08:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.195 10:08:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.195 10:08:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.195 10:08:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.195 10:08:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.195 10:08:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.195 10:08:17 -- accel/accel.sh@42 -- # jq -r . 00:07:58.195 [2024-11-19 10:08:17.479380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.195 [2024-11-19 10:08:17.479470] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70989 ] 00:07:58.195 [2024-11-19 10:08:17.616162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.195 [2024-11-19 10:08:17.651215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=0x1 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=decompress 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.195 10:08:17 -- accel/accel.sh@20 
-- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=software 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=32 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=32 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=1 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val=Yes 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:58.195 10:08:17 -- accel/accel.sh@21 -- # val= 00:07:58.195 10:08:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # IFS=: 00:07:58.195 10:08:17 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # 
val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@21 -- # val= 00:07:59.573 10:08:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # IFS=: 00:07:59.573 10:08:18 -- accel/accel.sh@20 -- # read -r var val 00:07:59.573 10:08:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.573 10:08:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.573 10:08:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.573 00:07:59.573 real 0m2.694s 00:07:59.573 user 0m2.325s 00:07:59.573 sys 0m0.168s 00:07:59.573 10:08:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.573 10:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:59.573 ************************************ 00:07:59.573 END TEST accel_decmop_full 00:07:59.573 ************************************ 00:07:59.573 10:08:18 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.573 10:08:18 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:59.573 10:08:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.573 10:08:18 -- common/autotest_common.sh@10 -- # set +x 00:07:59.573 ************************************ 00:07:59.573 START TEST accel_decomp_mcore 00:07:59.573 ************************************ 00:07:59.573 10:08:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.573 10:08:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.573 10:08:18 -- accel/accel.sh@17 -- # local accel_module 00:07:59.573 10:08:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.573 10:08:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.573 10:08:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.573 10:08:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.573 10:08:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.573 10:08:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.573 10:08:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.573 10:08:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.573 10:08:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.573 10:08:18 -- accel/accel.sh@42 -- # jq -r . 00:07:59.573 [2024-11-19 10:08:18.870624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.573 [2024-11-19 10:08:18.870744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71023 ] 00:07:59.573 [2024-11-19 10:08:19.011594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.573 [2024-11-19 10:08:19.049718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.573 [2024-11-19 10:08:19.049869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.573 [2024-11-19 10:08:19.050007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.573 [2024-11-19 10:08:19.050011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.951 10:08:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:00.951 00:08:00.951 SPDK Configuration: 00:08:00.951 Core mask: 0xf 00:08:00.951 00:08:00.951 Accel Perf Configuration: 00:08:00.951 Workload Type: decompress 00:08:00.951 Transfer size: 4096 bytes 00:08:00.951 Vector count 1 00:08:00.951 Module: software 00:08:00.951 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:00.951 Queue depth: 32 00:08:00.951 Allocate depth: 32 00:08:00.951 # threads/core: 1 00:08:00.951 Run time: 1 seconds 00:08:00.951 Verify: Yes 00:08:00.951 00:08:00.951 Running for 1 seconds... 00:08:00.951 00:08:00.951 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:00.951 ------------------------------------------------------------------------------------ 00:08:00.951 0,0 56736/s 104 MiB/s 0 0 00:08:00.951 3,0 52320/s 96 MiB/s 0 0 00:08:00.951 2,0 56992/s 105 MiB/s 0 0 00:08:00.951 1,0 54944/s 101 MiB/s 0 0 00:08:00.951 ==================================================================================== 00:08:00.951 Total 220992/s 863 MiB/s 0 0' 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:00.951 10:08:20 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.951 10:08:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.951 10:08:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.951 10:08:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.951 10:08:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.951 10:08:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.951 10:08:20 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.951 10:08:20 -- accel/accel.sh@42 -- # jq -r . 00:08:00.951 [2024-11-19 10:08:20.212386] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:00.951 [2024-11-19 10:08:20.212501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:08:00.951 [2024-11-19 10:08:20.350505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.951 [2024-11-19 10:08:20.392422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.951 [2024-11-19 10:08:20.392562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.951 [2024-11-19 10:08:20.392635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.951 [2024-11-19 10:08:20.392639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val=0xf 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val=decompress 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val=software 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@23 -- # accel_module=software 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # IFS=: 
00:08:00.951 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.951 10:08:20 -- accel/accel.sh@21 -- # val=32 00:08:00.951 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val=32 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val=1 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val=Yes 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:00.952 10:08:20 -- accel/accel.sh@21 -- # val= 00:08:00.952 10:08:20 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # IFS=: 00:08:00.952 10:08:20 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- 
accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@21 -- # val= 00:08:02.346 10:08:21 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # IFS=: 00:08:02.346 10:08:21 -- accel/accel.sh@20 -- # read -r var val 00:08:02.346 10:08:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.346 10:08:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:02.346 10:08:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.346 00:08:02.346 real 0m2.687s 00:08:02.346 user 0m8.757s 00:08:02.347 sys 0m0.173s 00:08:02.347 10:08:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.347 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.347 ************************************ 00:08:02.347 END TEST accel_decomp_mcore 00:08:02.347 ************************************ 00:08:02.347 10:08:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.347 10:08:21 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:02.347 10:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.347 10:08:21 -- common/autotest_common.sh@10 -- # set +x 00:08:02.347 ************************************ 00:08:02.347 START TEST accel_decomp_full_mcore 00:08:02.347 ************************************ 00:08:02.347 10:08:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.347 10:08:21 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.347 10:08:21 -- accel/accel.sh@17 -- # local accel_module 00:08:02.347 10:08:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.347 10:08:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.347 10:08:21 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.347 10:08:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.347 10:08:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.347 10:08:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.347 10:08:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.347 10:08:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.347 10:08:21 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.347 10:08:21 -- accel/accel.sh@42 -- # jq -r . 00:08:02.347 [2024-11-19 10:08:21.604637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.347 [2024-11-19 10:08:21.604719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71078 ] 00:08:02.347 [2024-11-19 10:08:21.738770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.347 [2024-11-19 10:08:21.777112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.347 [2024-11-19 10:08:21.777248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.347 [2024-11-19 10:08:21.778063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.347 [2024-11-19 10:08:21.778073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.779 10:08:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:03.779 00:08:03.779 SPDK Configuration: 00:08:03.779 Core mask: 0xf 00:08:03.779 00:08:03.779 Accel Perf Configuration: 00:08:03.779 Workload Type: decompress 00:08:03.779 Transfer size: 111250 bytes 00:08:03.779 Vector count 1 00:08:03.779 Module: software 00:08:03.779 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:03.779 Queue depth: 32 00:08:03.779 Allocate depth: 32 00:08:03.779 # threads/core: 1 00:08:03.779 Run time: 1 seconds 00:08:03.779 Verify: Yes 00:08:03.779 00:08:03.779 Running for 1 seconds... 00:08:03.779 00:08:03.779 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:03.779 ------------------------------------------------------------------------------------ 00:08:03.779 0,0 4320/s 178 MiB/s 0 0 00:08:03.779 3,0 4224/s 174 MiB/s 0 0 00:08:03.779 2,0 4320/s 178 MiB/s 0 0 00:08:03.779 1,0 3904/s 161 MiB/s 0 0 00:08:03.779 ==================================================================================== 00:08:03.779 Total 16768/s 1779 MiB/s 0 0' 00:08:03.779 10:08:22 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:03.779 10:08:22 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:03.779 10:08:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.779 10:08:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:03.779 10:08:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.779 10:08:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.779 10:08:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:03.779 10:08:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:03.779 10:08:22 -- accel/accel.sh@41 -- # local IFS=, 00:08:03.779 10:08:22 -- accel/accel.sh@42 -- # jq -r . 00:08:03.779 [2024-11-19 10:08:22.946724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:03.779 [2024-11-19 10:08:22.946936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:08:03.779 [2024-11-19 10:08:23.080119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.779 [2024-11-19 10:08:23.118020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.779 [2024-11-19 10:08:23.118144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.779 [2024-11-19 10:08:23.118263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.779 [2024-11-19 10:08:23.118270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val=0xf 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val=decompress 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.779 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.779 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.779 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=software 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@23 -- # accel_module=software 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 
00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=32 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=32 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=1 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val=Yes 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:03.780 10:08:23 -- accel/accel.sh@21 -- # val= 00:08:03.780 10:08:23 -- accel/accel.sh@22 -- # case "$var" in 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # IFS=: 00:08:03.780 10:08:23 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- 
accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@21 -- # val= 00:08:05.156 10:08:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # IFS=: 00:08:05.156 10:08:24 -- accel/accel.sh@20 -- # read -r var val 00:08:05.156 10:08:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:05.156 10:08:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:05.156 10:08:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.157 00:08:05.157 real 0m2.703s 00:08:05.157 user 0m8.878s 00:08:05.157 sys 0m0.166s 00:08:05.157 10:08:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:05.157 10:08:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.157 ************************************ 00:08:05.157 END TEST accel_decomp_full_mcore 00:08:05.157 ************************************ 00:08:05.157 10:08:24 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.157 10:08:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:05.157 10:08:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.157 10:08:24 -- common/autotest_common.sh@10 -- # set +x 00:08:05.157 ************************************ 00:08:05.157 START TEST accel_decomp_mthread 00:08:05.157 ************************************ 00:08:05.157 10:08:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.157 10:08:24 -- accel/accel.sh@16 -- # local accel_opc 00:08:05.157 10:08:24 -- accel/accel.sh@17 -- # local accel_module 00:08:05.157 10:08:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.157 10:08:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:05.157 10:08:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.157 10:08:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.157 10:08:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.157 10:08:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.157 10:08:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.157 10:08:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.157 10:08:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.157 10:08:24 -- accel/accel.sh@42 -- # jq -r . 00:08:05.157 [2024-11-19 10:08:24.353508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:05.157 [2024-11-19 10:08:24.353645] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71138 ] 00:08:05.157 [2024-11-19 10:08:24.502094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.157 [2024-11-19 10:08:24.541899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.535 10:08:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:06.535 00:08:06.535 SPDK Configuration: 00:08:06.535 Core mask: 0x1 00:08:06.535 00:08:06.535 Accel Perf Configuration: 00:08:06.535 Workload Type: decompress 00:08:06.535 Transfer size: 4096 bytes 00:08:06.535 Vector count 1 00:08:06.535 Module: software 00:08:06.535 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.535 Queue depth: 32 00:08:06.535 Allocate depth: 32 00:08:06.535 # threads/core: 2 00:08:06.535 Run time: 1 seconds 00:08:06.535 Verify: Yes 00:08:06.535 00:08:06.535 Running for 1 seconds... 00:08:06.535 00:08:06.535 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:06.535 ------------------------------------------------------------------------------------ 00:08:06.535 0,1 31296/s 57 MiB/s 0 0 00:08:06.535 0,0 31136/s 57 MiB/s 0 0 00:08:06.535 ==================================================================================== 00:08:06.535 Total 62432/s 243 MiB/s 0 0' 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.535 10:08:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.535 10:08:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.535 10:08:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.535 10:08:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.535 10:08:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.535 10:08:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.535 10:08:25 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.535 10:08:25 -- accel/accel.sh@42 -- # jq -r . 00:08:06.535 [2024-11-19 10:08:25.699109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:06.535 [2024-11-19 10:08:25.699198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71152 ] 00:08:06.535 [2024-11-19 10:08:25.829526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.535 [2024-11-19 10:08:25.864267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=0x1 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=decompress 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=software 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@23 -- # accel_module=software 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=32 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- 
accel/accel.sh@21 -- # val=32 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=2 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val=Yes 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:06.535 10:08:25 -- accel/accel.sh@21 -- # val= 00:08:06.535 10:08:25 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # IFS=: 00:08:06.535 10:08:25 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:26 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:26 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:26 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:26 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:26 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:26 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@21 -- # val= 00:08:07.471 10:08:27 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # IFS=: 00:08:07.471 10:08:27 -- accel/accel.sh@20 -- # read -r var val 00:08:07.471 10:08:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.471 10:08:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.471 10:08:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.471 00:08:07.471 real 0m2.679s 00:08:07.471 user 0m2.318s 00:08:07.471 sys 0m0.158s 00:08:07.471 10:08:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.471 10:08:27 -- common/autotest_common.sh@10 -- # set +x 00:08:07.471 ************************************ 00:08:07.471 END 
TEST accel_decomp_mthread 00:08:07.471 ************************************ 00:08:07.729 10:08:27 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.730 10:08:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:07.730 10:08:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.730 10:08:27 -- common/autotest_common.sh@10 -- # set +x 00:08:07.730 ************************************ 00:08:07.730 START TEST accel_deomp_full_mthread 00:08:07.730 ************************************ 00:08:07.730 10:08:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.730 10:08:27 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.730 10:08:27 -- accel/accel.sh@17 -- # local accel_module 00:08:07.730 10:08:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.730 10:08:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:07.730 10:08:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.730 10:08:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.730 10:08:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.730 10:08:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.730 10:08:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.730 10:08:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.730 10:08:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.730 10:08:27 -- accel/accel.sh@42 -- # jq -r . 00:08:07.730 [2024-11-19 10:08:27.078287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:07.730 [2024-11-19 10:08:27.078994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:08:07.730 [2024-11-19 10:08:27.217527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.730 [2024-11-19 10:08:27.255749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.105 10:08:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:09.105 00:08:09.105 SPDK Configuration: 00:08:09.105 Core mask: 0x1 00:08:09.105 00:08:09.105 Accel Perf Configuration: 00:08:09.105 Workload Type: decompress 00:08:09.105 Transfer size: 111250 bytes 00:08:09.105 Vector count 1 00:08:09.105 Module: software 00:08:09.105 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.105 Queue depth: 32 00:08:09.105 Allocate depth: 32 00:08:09.105 # threads/core: 2 00:08:09.105 Run time: 1 seconds 00:08:09.105 Verify: Yes 00:08:09.105 00:08:09.105 Running for 1 seconds... 
00:08:09.105 00:08:09.105 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:09.105 ------------------------------------------------------------------------------------ 00:08:09.105 0,1 2176/s 89 MiB/s 0 0 00:08:09.105 0,0 2144/s 88 MiB/s 0 0 00:08:09.105 ==================================================================================== 00:08:09.105 Total 4320/s 458 MiB/s 0 0' 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.105 10:08:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.105 10:08:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.105 10:08:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.105 10:08:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.105 10:08:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.105 10:08:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.105 10:08:28 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.105 10:08:28 -- accel/accel.sh@42 -- # jq -r . 00:08:09.105 [2024-11-19 10:08:28.446017] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.105 [2024-11-19 10:08:28.446125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71206 ] 00:08:09.105 [2024-11-19 10:08:28.581868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.105 [2024-11-19 10:08:28.615016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val=0x1 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.105 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.105 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.105 10:08:28 -- accel/accel.sh@21 -- # val=decompress 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=software 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=32 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=32 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=2 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val=Yes 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:09.364 10:08:28 -- accel/accel.sh@21 -- # val= 00:08:09.364 10:08:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # IFS=: 00:08:09.364 10:08:28 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # 
read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@21 -- # val= 00:08:10.300 10:08:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # IFS=: 00:08:10.300 10:08:29 -- accel/accel.sh@20 -- # read -r var val 00:08:10.300 10:08:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.300 10:08:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.300 10:08:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.300 00:08:10.300 real 0m2.712s 00:08:10.300 user 0m2.351s 00:08:10.300 sys 0m0.159s 00:08:10.300 10:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.300 10:08:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.300 ************************************ 00:08:10.300 END TEST accel_deomp_full_mthread 00:08:10.300 ************************************ 00:08:10.300 10:08:29 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:10.300 10:08:29 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.300 10:08:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:10.300 10:08:29 -- accel/accel.sh@129 -- # build_accel_config 00:08:10.300 10:08:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.300 10:08:29 -- common/autotest_common.sh@10 -- # set +x 00:08:10.300 10:08:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.300 10:08:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.300 10:08:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.300 10:08:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.300 10:08:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.300 10:08:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.300 10:08:29 -- accel/accel.sh@42 -- # jq -r . 00:08:10.300 ************************************ 00:08:10.300 START TEST accel_dif_functional_tests 00:08:10.300 ************************************ 00:08:10.300 10:08:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:10.559 [2024-11-19 10:08:29.871404] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:10.559 [2024-11-19 10:08:29.872307] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71236 ] 00:08:10.559 [2024-11-19 10:08:30.015589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.559 [2024-11-19 10:08:30.056252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.559 [2024-11-19 10:08:30.056323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.559 [2024-11-19 10:08:30.056327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.817 00:08:10.817 00:08:10.817 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.817 http://cunit.sourceforge.net/ 00:08:10.817 00:08:10.817 00:08:10.817 Suite: accel_dif 00:08:10.817 Test: verify: DIF generated, GUARD check ...passed 00:08:10.817 Test: verify: DIF generated, APPTAG check ...passed 00:08:10.817 Test: verify: DIF generated, REFTAG check ...passed 00:08:10.817 Test: verify: DIF not generated, GUARD check ...passed 00:08:10.817 Test: verify: DIF not generated, APPTAG check ...[2024-11-19 10:08:30.107398] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.817 [2024-11-19 10:08:30.107470] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.817 [2024-11-19 10:08:30.107523] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.817 passed 00:08:10.817 Test: verify: DIF not generated, REFTAG check ...[2024-11-19 10:08:30.107877] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.817 [2024-11-19 10:08:30.107933] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.817 passed 00:08:10.817 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:10.817 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-19 10:08:30.107961] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.817 passed 00:08:10.817 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:10.817 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:10.817 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-11-19 10:08:30.108024] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:10.817 passed 00:08:10.817 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-19 10:08:30.108332] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:10.817 passed 00:08:10.817 Test: generate copy: DIF generated, GUARD check ...passed 00:08:10.818 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:10.818 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:10.818 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:10.818 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:10.818 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:10.818 Test: generate copy: iovecs-len validate ...passed 00:08:10.818 Test: generate copy: buffer alignment validate ...[2024-11-19 10:08:30.108653] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:10.818 passed 00:08:10.818 00:08:10.818 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.818 suites 1 1 n/a 0 0 00:08:10.818 tests 20 20 20 0 0 00:08:10.818 asserts 204 204 204 0 n/a 00:08:10.818 00:08:10.818 Elapsed time = 0.003 seconds 00:08:10.818 00:08:10.818 real 0m0.427s 00:08:10.818 user 0m0.483s 00:08:10.818 sys 0m0.112s 00:08:10.818 10:08:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.818 ************************************ 00:08:10.818 END TEST accel_dif_functional_tests 00:08:10.818 ************************************ 00:08:10.818 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:10.818 00:08:10.818 real 0m57.227s 00:08:10.818 user 1m2.286s 00:08:10.818 sys 0m4.429s 00:08:10.818 10:08:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.818 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:10.818 ************************************ 00:08:10.818 END TEST accel 00:08:10.818 ************************************ 00:08:10.818 10:08:30 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:10.818 10:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:10.818 10:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.818 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:10.818 ************************************ 00:08:10.818 START TEST accel_rpc 00:08:10.818 ************************************ 00:08:10.818 10:08:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:11.077 * Looking for test storage... 00:08:11.077 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:11.077 10:08:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.077 10:08:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.077 10:08:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.077 10:08:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.077 10:08:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.077 10:08:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.077 10:08:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.077 10:08:30 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.077 10:08:30 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.077 10:08:30 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.077 10:08:30 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.077 10:08:30 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.077 10:08:30 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.077 10:08:30 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.077 10:08:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.077 10:08:30 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.077 10:08:30 -- scripts/common.sh@344 -- # : 1 00:08:11.077 10:08:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.077 10:08:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.077 10:08:30 -- scripts/common.sh@364 -- # decimal 1 00:08:11.077 10:08:30 -- scripts/common.sh@352 -- # local d=1 00:08:11.077 10:08:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.077 10:08:30 -- scripts/common.sh@354 -- # echo 1 00:08:11.077 10:08:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.077 10:08:30 -- scripts/common.sh@365 -- # decimal 2 00:08:11.077 10:08:30 -- scripts/common.sh@352 -- # local d=2 00:08:11.077 10:08:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.077 10:08:30 -- scripts/common.sh@354 -- # echo 2 00:08:11.077 10:08:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.077 10:08:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.077 10:08:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.077 10:08:30 -- scripts/common.sh@367 -- # return 0 00:08:11.077 10:08:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.077 10:08:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.077 --rc genhtml_branch_coverage=1 00:08:11.077 --rc genhtml_function_coverage=1 00:08:11.077 --rc genhtml_legend=1 00:08:11.077 --rc geninfo_all_blocks=1 00:08:11.077 --rc geninfo_unexecuted_blocks=1 00:08:11.077 00:08:11.077 ' 00:08:11.077 10:08:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.077 --rc genhtml_branch_coverage=1 00:08:11.077 --rc genhtml_function_coverage=1 00:08:11.077 --rc genhtml_legend=1 00:08:11.077 --rc geninfo_all_blocks=1 00:08:11.077 --rc geninfo_unexecuted_blocks=1 00:08:11.077 00:08:11.077 ' 00:08:11.077 10:08:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.077 --rc genhtml_branch_coverage=1 00:08:11.077 --rc genhtml_function_coverage=1 00:08:11.077 --rc genhtml_legend=1 00:08:11.077 --rc geninfo_all_blocks=1 00:08:11.077 --rc geninfo_unexecuted_blocks=1 00:08:11.077 00:08:11.077 ' 00:08:11.077 10:08:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.077 --rc genhtml_branch_coverage=1 00:08:11.077 --rc genhtml_function_coverage=1 00:08:11.077 --rc genhtml_legend=1 00:08:11.077 --rc geninfo_all_blocks=1 00:08:11.077 --rc geninfo_unexecuted_blocks=1 00:08:11.077 00:08:11.077 ' 00:08:11.077 10:08:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.077 10:08:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71313 00:08:11.077 10:08:30 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:11.077 10:08:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 71313 00:08:11.077 10:08:30 -- common/autotest_common.sh@829 -- # '[' -z 71313 ']' 00:08:11.077 10:08:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.077 10:08:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.077 10:08:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:11.077 10:08:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.077 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.077 [2024-11-19 10:08:30.570193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:11.077 [2024-11-19 10:08:30.570302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71313 ] 00:08:11.336 [2024-11-19 10:08:30.707058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.336 [2024-11-19 10:08:30.749134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.336 [2024-11-19 10:08:30.749310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.336 10:08:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.336 10:08:30 -- common/autotest_common.sh@862 -- # return 0 00:08:11.336 10:08:30 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:11.337 10:08:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.337 10:08:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.337 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 ************************************ 00:08:11.337 START TEST accel_assign_opcode 00:08:11.337 ************************************ 00:08:11.337 10:08:30 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:11.337 10:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 [2024-11-19 10:08:30.813753] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:11.337 10:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:11.337 10:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.337 [2024-11-19 10:08:30.821734] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:11.337 10:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.337 10:08:30 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:11.337 10:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.337 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 10:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 10:08:30 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:11.596 10:08:30 -- accel/accel_rpc.sh@42 -- # grep software 00:08:11.596 10:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.596 10:08:30 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:11.596 10:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 10:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.596 software 00:08:11.596 
************************************ 00:08:11.596 END TEST accel_assign_opcode 00:08:11.596 ************************************ 00:08:11.596 00:08:11.596 real 0m0.199s 00:08:11.596 user 0m0.047s 00:08:11.596 sys 0m0.013s 00:08:11.596 10:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.596 10:08:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.596 10:08:31 -- accel/accel_rpc.sh@55 -- # killprocess 71313 00:08:11.596 10:08:31 -- common/autotest_common.sh@936 -- # '[' -z 71313 ']' 00:08:11.596 10:08:31 -- common/autotest_common.sh@940 -- # kill -0 71313 00:08:11.596 10:08:31 -- common/autotest_common.sh@941 -- # uname 00:08:11.596 10:08:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.596 10:08:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71313 00:08:11.596 killing process with pid 71313 00:08:11.596 10:08:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:11.596 10:08:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:11.596 10:08:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71313' 00:08:11.596 10:08:31 -- common/autotest_common.sh@955 -- # kill 71313 00:08:11.596 10:08:31 -- common/autotest_common.sh@960 -- # wait 71313 00:08:11.855 00:08:11.855 real 0m0.987s 00:08:11.855 user 0m0.986s 00:08:11.855 sys 0m0.320s 00:08:11.855 10:08:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.855 ************************************ 00:08:11.855 10:08:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.855 END TEST accel_rpc 00:08:11.855 ************************************ 00:08:11.855 10:08:31 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:11.855 10:08:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.855 10:08:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.855 10:08:31 -- common/autotest_common.sh@10 -- # set +x 00:08:11.855 ************************************ 00:08:11.855 START TEST app_cmdline 00:08:11.855 ************************************ 00:08:11.855 10:08:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:12.116 * Looking for test storage... 
00:08:12.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:12.116 10:08:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.116 10:08:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.116 10:08:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:12.116 10:08:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:12.116 10:08:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:12.116 10:08:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:12.116 10:08:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:12.116 10:08:31 -- scripts/common.sh@335 -- # IFS=.-: 00:08:12.116 10:08:31 -- scripts/common.sh@335 -- # read -ra ver1 00:08:12.116 10:08:31 -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.116 10:08:31 -- scripts/common.sh@336 -- # read -ra ver2 00:08:12.116 10:08:31 -- scripts/common.sh@337 -- # local 'op=<' 00:08:12.116 10:08:31 -- scripts/common.sh@339 -- # ver1_l=2 00:08:12.116 10:08:31 -- scripts/common.sh@340 -- # ver2_l=1 00:08:12.116 10:08:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:12.116 10:08:31 -- scripts/common.sh@343 -- # case "$op" in 00:08:12.116 10:08:31 -- scripts/common.sh@344 -- # : 1 00:08:12.116 10:08:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:12.116 10:08:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.116 10:08:31 -- scripts/common.sh@364 -- # decimal 1 00:08:12.116 10:08:31 -- scripts/common.sh@352 -- # local d=1 00:08:12.116 10:08:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.116 10:08:31 -- scripts/common.sh@354 -- # echo 1 00:08:12.116 10:08:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:12.116 10:08:31 -- scripts/common.sh@365 -- # decimal 2 00:08:12.116 10:08:31 -- scripts/common.sh@352 -- # local d=2 00:08:12.116 10:08:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.116 10:08:31 -- scripts/common.sh@354 -- # echo 2 00:08:12.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.116 10:08:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:12.116 10:08:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:12.116 10:08:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:12.116 10:08:31 -- scripts/common.sh@367 -- # return 0 00:08:12.116 10:08:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.116 10:08:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:12.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.116 --rc genhtml_branch_coverage=1 00:08:12.116 --rc genhtml_function_coverage=1 00:08:12.116 --rc genhtml_legend=1 00:08:12.116 --rc geninfo_all_blocks=1 00:08:12.116 --rc geninfo_unexecuted_blocks=1 00:08:12.116 00:08:12.116 ' 00:08:12.116 10:08:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:12.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.116 --rc genhtml_branch_coverage=1 00:08:12.116 --rc genhtml_function_coverage=1 00:08:12.116 --rc genhtml_legend=1 00:08:12.116 --rc geninfo_all_blocks=1 00:08:12.116 --rc geninfo_unexecuted_blocks=1 00:08:12.116 00:08:12.116 ' 00:08:12.116 10:08:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:12.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.116 --rc genhtml_branch_coverage=1 00:08:12.116 --rc genhtml_function_coverage=1 00:08:12.116 --rc genhtml_legend=1 00:08:12.116 --rc geninfo_all_blocks=1 00:08:12.116 --rc geninfo_unexecuted_blocks=1 00:08:12.116 00:08:12.116 ' 00:08:12.116 10:08:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:12.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.116 --rc genhtml_branch_coverage=1 00:08:12.116 --rc genhtml_function_coverage=1 00:08:12.116 --rc genhtml_legend=1 00:08:12.116 --rc geninfo_all_blocks=1 00:08:12.116 --rc geninfo_unexecuted_blocks=1 00:08:12.116 00:08:12.116 ' 00:08:12.116 10:08:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:12.116 10:08:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71412 00:08:12.116 10:08:31 -- app/cmdline.sh@18 -- # waitforlisten 71412 00:08:12.116 10:08:31 -- common/autotest_common.sh@829 -- # '[' -z 71412 ']' 00:08:12.116 10:08:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.116 10:08:31 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:12.116 10:08:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.116 10:08:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.116 10:08:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.116 10:08:31 -- common/autotest_common.sh@10 -- # set +x 00:08:12.116 [2024-11-19 10:08:31.573445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
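The cmdline test just started spdk_tgt with an RPC allowlist, so only the two listed methods should be served on the default socket; a condensed replay using the binaries from this run (the rejection of anything else is what the env_dpdk_get_mem_stats call below demonstrates):
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
    $rpc spdk_get_version        # allowed: returns the version JSON shown below
    $rpc rpc_get_methods         # allowed: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats || echo "rejected: JSON-RPC -32601, method not found"
    kill "$tgt_pid"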
00:08:12.116 [2024-11-19 10:08:31.573545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71412 ] 00:08:12.375 [2024-11-19 10:08:31.736516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.375 [2024-11-19 10:08:31.788725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.375 [2024-11-19 10:08:31.789138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.309 10:08:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.309 10:08:32 -- common/autotest_common.sh@862 -- # return 0 00:08:13.309 10:08:32 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:13.568 { 00:08:13.568 "fields": { 00:08:13.568 "commit": "c13c99a5e", 00:08:13.568 "major": 24, 00:08:13.568 "minor": 1, 00:08:13.568 "patch": 1, 00:08:13.568 "suffix": "-pre" 00:08:13.568 }, 00:08:13.568 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e" 00:08:13.568 } 00:08:13.568 10:08:32 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:13.568 10:08:32 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:13.568 10:08:32 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:13.568 10:08:32 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:13.568 10:08:32 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:13.568 10:08:32 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:13.568 10:08:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.569 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:08:13.569 10:08:32 -- app/cmdline.sh@26 -- # sort 00:08:13.569 10:08:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.569 10:08:32 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:13.569 10:08:32 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:13.569 10:08:32 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.569 10:08:32 -- common/autotest_common.sh@650 -- # local es=0 00:08:13.569 10:08:32 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.569 10:08:32 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 10:08:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 10:08:32 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 10:08:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 10:08:32 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 10:08:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:13.569 10:08:32 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.569 10:08:32 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:13.569 10:08:32 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.827 2024/11/19 10:08:33 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:08:13.827 request: 00:08:13.827 { 00:08:13.827 "method": "env_dpdk_get_mem_stats", 00:08:13.827 "params": {} 00:08:13.827 } 00:08:13.827 Got JSON-RPC error response 00:08:13.827 GoRPCClient: error on JSON-RPC call 00:08:13.827 10:08:33 -- common/autotest_common.sh@653 -- # es=1 00:08:13.827 10:08:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:13.827 10:08:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:13.827 10:08:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:13.827 10:08:33 -- app/cmdline.sh@1 -- # killprocess 71412 00:08:13.827 10:08:33 -- common/autotest_common.sh@936 -- # '[' -z 71412 ']' 00:08:13.827 10:08:33 -- common/autotest_common.sh@940 -- # kill -0 71412 00:08:13.827 10:08:33 -- common/autotest_common.sh@941 -- # uname 00:08:13.827 10:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:13.827 10:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71412 00:08:13.827 10:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:13.827 10:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:13.827 killing process with pid 71412 00:08:13.827 10:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71412' 00:08:13.827 10:08:33 -- common/autotest_common.sh@955 -- # kill 71412 00:08:13.827 10:08:33 -- common/autotest_common.sh@960 -- # wait 71412 00:08:14.086 00:08:14.086 real 0m2.127s 00:08:14.086 user 0m2.846s 00:08:14.086 sys 0m0.406s 00:08:14.086 10:08:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.086 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.086 ************************************ 00:08:14.086 END TEST app_cmdline 00:08:14.086 ************************************ 00:08:14.086 10:08:33 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.086 10:08:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.086 10:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.086 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.086 ************************************ 00:08:14.086 START TEST version 00:08:14.086 ************************************ 00:08:14.086 10:08:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.086 * Looking for test storage... 
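The version test that starts here cross-checks the C header against the installed Python package; a condensed sketch of the extraction it performs, using the header path from this run (the bare cut -f2 relies on the tab between the macro name and its value):
    hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"       # 24.1.1 in this run
    # the Python package must agree with the header
    python3 -c 'import spdk; print(spdk.__version__)'   # expect 24.1.1rc0 (suffix -pre maps to rc0)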
00:08:14.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:14.086 10:08:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.086 10:08:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.086 10:08:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.346 10:08:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.346 10:08:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.346 10:08:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.346 10:08:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.346 10:08:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.346 10:08:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.346 10:08:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.346 10:08:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.346 10:08:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.346 10:08:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.346 10:08:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.346 10:08:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.346 10:08:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.346 10:08:33 -- scripts/common.sh@344 -- # : 1 00:08:14.346 10:08:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.346 10:08:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.346 10:08:33 -- scripts/common.sh@364 -- # decimal 1 00:08:14.346 10:08:33 -- scripts/common.sh@352 -- # local d=1 00:08:14.346 10:08:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.346 10:08:33 -- scripts/common.sh@354 -- # echo 1 00:08:14.346 10:08:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.346 10:08:33 -- scripts/common.sh@365 -- # decimal 2 00:08:14.346 10:08:33 -- scripts/common.sh@352 -- # local d=2 00:08:14.346 10:08:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.346 10:08:33 -- scripts/common.sh@354 -- # echo 2 00:08:14.346 10:08:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.346 10:08:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.346 10:08:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.346 10:08:33 -- scripts/common.sh@367 -- # return 0 00:08:14.346 10:08:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.346 10:08:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 10:08:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 10:08:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 10:08:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.346 --rc genhtml_branch_coverage=1 00:08:14.346 --rc genhtml_function_coverage=1 00:08:14.346 --rc genhtml_legend=1 00:08:14.346 --rc geninfo_all_blocks=1 00:08:14.346 --rc geninfo_unexecuted_blocks=1 00:08:14.346 00:08:14.346 ' 00:08:14.346 10:08:33 -- app/version.sh@17 -- # get_header_version major 00:08:14.346 10:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.346 10:08:33 -- app/version.sh@14 -- # cut -f2 00:08:14.346 10:08:33 -- app/version.sh@14 -- # tr -d '"' 00:08:14.346 10:08:33 -- app/version.sh@17 -- # major=24 00:08:14.346 10:08:33 -- app/version.sh@18 -- # get_header_version minor 00:08:14.346 10:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.346 10:08:33 -- app/version.sh@14 -- # cut -f2 00:08:14.346 10:08:33 -- app/version.sh@14 -- # tr -d '"' 00:08:14.346 10:08:33 -- app/version.sh@18 -- # minor=1 00:08:14.346 10:08:33 -- app/version.sh@19 -- # get_header_version patch 00:08:14.346 10:08:33 -- app/version.sh@14 -- # cut -f2 00:08:14.346 10:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.346 10:08:33 -- app/version.sh@14 -- # tr -d '"' 00:08:14.346 10:08:33 -- app/version.sh@19 -- # patch=1 00:08:14.346 10:08:33 -- app/version.sh@20 -- # get_header_version suffix 00:08:14.346 10:08:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.346 10:08:33 -- app/version.sh@14 -- # cut -f2 00:08:14.346 10:08:33 -- app/version.sh@14 -- # tr -d '"' 00:08:14.346 10:08:33 -- app/version.sh@20 -- # suffix=-pre 00:08:14.346 10:08:33 -- app/version.sh@22 -- # version=24.1 00:08:14.346 10:08:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:14.346 10:08:33 -- app/version.sh@25 -- # version=24.1.1 00:08:14.346 10:08:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:14.346 10:08:33 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.346 10:08:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:14.346 10:08:33 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:14.346 10:08:33 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:14.346 00:08:14.346 real 0m0.226s 00:08:14.346 user 0m0.141s 00:08:14.346 sys 0m0.118s 00:08:14.346 10:08:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.346 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.346 ************************************ 00:08:14.346 END TEST version 00:08:14.346 ************************************ 00:08:14.346 10:08:33 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@191 -- # uname -s 00:08:14.346 10:08:33 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:14.346 10:08:33 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:14.346 10:08:33 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:14.346 10:08:33 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:14.346 10:08:33 
-- spdk/autotest.sh@255 -- # timing_exit lib 00:08:14.346 10:08:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:14.346 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.346 10:08:33 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:14.346 10:08:33 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:08:14.346 10:08:33 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.346 10:08:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.346 10:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.346 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:08:14.346 ************************************ 00:08:14.346 START TEST nvmf_tcp 00:08:14.346 ************************************ 00:08:14.346 10:08:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:14.710 * Looking for test storage... 00:08:14.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:14.710 10:08:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.710 10:08:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.710 10:08:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.710 10:08:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.710 10:08:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.710 10:08:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.710 10:08:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.710 10:08:33 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.710 10:08:33 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.710 10:08:33 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.710 10:08:33 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.710 10:08:33 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.710 10:08:33 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.710 10:08:33 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.710 10:08:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.710 10:08:33 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.710 10:08:33 -- scripts/common.sh@344 -- # : 1 00:08:14.710 10:08:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.710 10:08:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.710 10:08:33 -- scripts/common.sh@364 -- # decimal 1 00:08:14.710 10:08:33 -- scripts/common.sh@352 -- # local d=1 00:08:14.710 10:08:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.710 10:08:33 -- scripts/common.sh@354 -- # echo 1 00:08:14.710 10:08:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.710 10:08:33 -- scripts/common.sh@365 -- # decimal 2 00:08:14.710 10:08:33 -- scripts/common.sh@352 -- # local d=2 00:08:14.710 10:08:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.710 10:08:33 -- scripts/common.sh@354 -- # echo 2 00:08:14.710 10:08:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.710 10:08:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.710 10:08:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.710 10:08:33 -- scripts/common.sh@367 -- # return 0 00:08:14.710 10:08:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.710 10:08:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.710 --rc genhtml_branch_coverage=1 00:08:14.710 --rc genhtml_function_coverage=1 00:08:14.710 --rc genhtml_legend=1 00:08:14.710 --rc geninfo_all_blocks=1 00:08:14.710 --rc geninfo_unexecuted_blocks=1 00:08:14.710 00:08:14.710 ' 00:08:14.710 10:08:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.710 --rc genhtml_branch_coverage=1 00:08:14.710 --rc genhtml_function_coverage=1 00:08:14.710 --rc genhtml_legend=1 00:08:14.710 --rc geninfo_all_blocks=1 00:08:14.710 --rc geninfo_unexecuted_blocks=1 00:08:14.710 00:08:14.710 ' 00:08:14.711 10:08:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.711 10:08:34 -- nvmf/common.sh@7 -- # uname -s 00:08:14.711 10:08:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.711 10:08:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.711 10:08:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.711 10:08:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.711 10:08:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.711 10:08:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.711 10:08:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.711 10:08:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.711 10:08:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.711 10:08:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.711 10:08:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:08:14.711 10:08:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:08:14.711 10:08:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.711 10:08:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.711 10:08:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.711 10:08:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.711 10:08:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.711 10:08:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.711 10:08:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.711 10:08:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.711 10:08:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.711 10:08:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.711 10:08:34 -- paths/export.sh@5 -- # export PATH 00:08:14.711 10:08:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.711 10:08:34 -- nvmf/common.sh@46 -- # : 0 00:08:14.711 10:08:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.711 10:08:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.711 10:08:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.711 10:08:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.711 10:08:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.711 10:08:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:14.711 10:08:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.711 10:08:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:14.711 10:08:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.711 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:14.711 10:08:34 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.711 10:08:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.711 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:14.711 ************************************ 00:08:14.711 START TEST nvmf_example 00:08:14.711 ************************************ 00:08:14.711 10:08:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:14.711 * Looking for test storage... 00:08:14.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:14.711 10:08:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.711 10:08:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.711 10:08:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.711 10:08:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.711 10:08:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.711 10:08:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.711 10:08:34 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.711 10:08:34 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.711 10:08:34 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.711 10:08:34 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.711 10:08:34 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.711 10:08:34 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.711 10:08:34 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.711 10:08:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.711 10:08:34 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.711 10:08:34 -- scripts/common.sh@344 -- # : 1 00:08:14.711 10:08:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.711 10:08:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:14.711 10:08:34 -- scripts/common.sh@364 -- # decimal 1 00:08:14.711 10:08:34 -- scripts/common.sh@352 -- # local d=1 00:08:14.711 10:08:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.711 10:08:34 -- scripts/common.sh@354 -- # echo 1 00:08:14.711 10:08:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.711 10:08:34 -- scripts/common.sh@365 -- # decimal 2 00:08:14.711 10:08:34 -- scripts/common.sh@352 -- # local d=2 00:08:14.711 10:08:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.711 10:08:34 -- scripts/common.sh@354 -- # echo 2 00:08:14.711 10:08:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.711 10:08:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.711 10:08:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.711 10:08:34 -- scripts/common.sh@367 -- # return 0 00:08:14.711 10:08:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.711 --rc genhtml_branch_coverage=1 00:08:14.711 --rc genhtml_function_coverage=1 00:08:14.711 --rc genhtml_legend=1 00:08:14.711 --rc geninfo_all_blocks=1 00:08:14.711 --rc geninfo_unexecuted_blocks=1 00:08:14.711 00:08:14.711 ' 00:08:14.711 10:08:34 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:14.711 10:08:34 -- nvmf/common.sh@7 -- # uname -s 00:08:14.711 10:08:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.711 10:08:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.711 10:08:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.711 10:08:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.711 10:08:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.711 10:08:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.711 10:08:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.711 10:08:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.711 10:08:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.711 10:08:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.711 10:08:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
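common.sh derives the initiator identity it would hand to any kernel-initiator commands from nvme-cli; a minimal sketch of that derivation (the connect invocation is illustrative only — this particular test drives I/O with spdk_nvme_perf rather than the kernel initiator):
    hostnqn=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    hostid=${hostnqn##*:}              # the bare uuid, reused as the host ID
    # a kernel connect against the subsystem created later in this run would look roughly like:
    # nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    #              --hostnqn="$hostnqn" --hostid="$hostid"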
00:08:14.711 10:08:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:08:14.711 10:08:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.712 10:08:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.712 10:08:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:14.712 10:08:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:14.712 10:08:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.712 10:08:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.712 10:08:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.712 10:08:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.712 10:08:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.712 10:08:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.712 10:08:34 -- paths/export.sh@5 -- # export PATH 00:08:14.712 10:08:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.712 10:08:34 -- nvmf/common.sh@46 -- # : 0 00:08:14.712 10:08:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:14.712 10:08:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:14.712 10:08:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:14.712 10:08:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.712 10:08:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.712 10:08:34 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:08:14.712 10:08:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:14.712 10:08:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:14.712 10:08:34 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:14.712 10:08:34 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:14.712 10:08:34 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:14.712 10:08:34 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:14.712 10:08:34 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:14.712 10:08:34 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:14.712 10:08:34 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:14.712 10:08:34 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:14.712 10:08:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:14.712 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:14.712 10:08:34 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:14.712 10:08:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:14.712 10:08:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.712 10:08:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:14.712 10:08:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:14.712 10:08:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:14.712 10:08:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.712 10:08:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.712 10:08:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.712 10:08:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:14.712 10:08:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:14.712 10:08:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:14.712 10:08:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:14.712 10:08:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:14.712 10:08:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:14.712 10:08:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.712 10:08:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.712 10:08:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:14.712 10:08:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:14.712 10:08:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:14.712 10:08:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:14.712 10:08:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:14.712 10:08:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.712 10:08:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:14.712 10:08:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:14.712 10:08:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:14.712 10:08:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:14.712 10:08:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:14.970 Cannot find device "nvmf_init_br" 00:08:14.970 10:08:34 -- nvmf/common.sh@153 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:14.970 Cannot find device "nvmf_tgt_br" 00:08:14.970 10:08:34 -- nvmf/common.sh@154 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:14.970 Cannot find device "nvmf_tgt_br2" 
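The "Cannot find device" lines above (and the ones that follow) are expected noise: before building its topology, nvmf_veth_init tears down anything left from an earlier run and tolerates absence. The idiom, reduced to a sketch:
    # best-effort pre-clean; every command may legitimately fail on a fresh host
    ip link set nvmf_init_br nomaster      || true
    ip link set nvmf_tgt_br down           || true
    ip link delete nvmf_br type bridge     || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true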
00:08:14.970 10:08:34 -- nvmf/common.sh@155 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:14.970 Cannot find device "nvmf_init_br" 00:08:14.970 10:08:34 -- nvmf/common.sh@156 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:14.970 Cannot find device "nvmf_tgt_br" 00:08:14.970 10:08:34 -- nvmf/common.sh@157 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:14.970 Cannot find device "nvmf_tgt_br2" 00:08:14.970 10:08:34 -- nvmf/common.sh@158 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:14.970 Cannot find device "nvmf_br" 00:08:14.970 10:08:34 -- nvmf/common.sh@159 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:14.970 Cannot find device "nvmf_init_if" 00:08:14.970 10:08:34 -- nvmf/common.sh@160 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:14.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.970 10:08:34 -- nvmf/common.sh@161 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:14.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:14.970 10:08:34 -- nvmf/common.sh@162 -- # true 00:08:14.970 10:08:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:14.970 10:08:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:14.970 10:08:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:14.970 10:08:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:14.970 10:08:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:14.970 10:08:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:14.970 10:08:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:14.970 10:08:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:14.970 10:08:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:14.970 10:08:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:14.970 10:08:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:14.970 10:08:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:14.970 10:08:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:14.970 10:08:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:14.970 10:08:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:14.970 10:08:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:14.970 10:08:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:15.229 10:08:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:15.229 10:08:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.229 10:08:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.229 10:08:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.229 10:08:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.229 10:08:34 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.229 10:08:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:15.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:08:15.229 00:08:15.229 --- 10.0.0.2 ping statistics --- 00:08:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.229 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:08:15.229 10:08:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:15.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:08:15.229 00:08:15.229 --- 10.0.0.3 ping statistics --- 00:08:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.229 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:08:15.229 10:08:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:15.229 00:08:15.229 --- 10.0.0.1 ping statistics --- 00:08:15.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.229 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:15.229 10:08:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.229 10:08:34 -- nvmf/common.sh@421 -- # return 0 00:08:15.229 10:08:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.229 10:08:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.229 10:08:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:15.229 10:08:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:15.229 10:08:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.229 10:08:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:15.229 10:08:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:15.229 10:08:34 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:15.229 10:08:34 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:15.229 10:08:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.229 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.230 10:08:34 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:15.230 10:08:34 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:15.230 10:08:34 -- target/nvmf_example.sh@34 -- # nvmfpid=71788 00:08:15.230 10:08:34 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:15.230 10:08:34 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:15.230 10:08:34 -- target/nvmf_example.sh@36 -- # waitforlisten 71788 00:08:15.230 10:08:34 -- common/autotest_common.sh@829 -- # '[' -z 71788 ']' 00:08:15.230 10:08:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.230 10:08:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.230 10:08:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
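The veth plumbing traced above gives the SPDK target its own network namespace while the initiator side stays on the host, with both veth peers bridged on nvmf_br. A condensed replay of the same commands (run as root; the second target interface, 10.0.0.3, is omitted for brevity):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (host side)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # stitch both pairs together
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # host -> namespaced target, as verified above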
00:08:15.230 10:08:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.230 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:08:15.488 10:08:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:15.488 10:08:35 -- common/autotest_common.sh@862 -- # return 0 00:08:15.488 10:08:35 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:15.488 10:08:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.488 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:15.747 10:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.747 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.747 10:08:35 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.747 10:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.747 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.747 10:08:35 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:15.747 10:08:35 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:15.747 10:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.747 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.747 10:08:35 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:15.747 10:08:35 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.747 10:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.747 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.747 10:08:35 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.747 10:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.747 10:08:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 10:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.747 10:08:35 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:08:15.747 10:08:35 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:27.949 Initializing NVMe Controllers 00:08:27.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:27.950 Initialization complete. Launching workers. 
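The target provisioning traced above, condensed into the equivalent rpc.py calls plus the perf invocation whose latency summary follows (paths, NQNs, and flags are the ones from this run; rpc.py talks to the example app's default RPC socket):
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, flags as traced above
    $rpc bdev_malloc_create 64 512                               # 64 MiB RAM-backed bdev -> "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 4 KiB mixed random I/O at queue depth 64 for 10 s against the new subsystem
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'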
00:08:27.950 ======================================================== 00:08:27.950 Latency(us) 00:08:27.950 Device Information : IOPS MiB/s Average min max 00:08:27.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15130.11 59.10 4230.31 840.07 22121.90 00:08:27.950 ======================================================== 00:08:27.950 Total : 15130.11 59.10 4230.31 840.07 22121.90 00:08:27.950 00:08:27.950 10:08:45 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:27.950 10:08:45 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:27.950 10:08:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:27.950 10:08:45 -- nvmf/common.sh@116 -- # sync 00:08:27.950 10:08:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:27.950 10:08:45 -- nvmf/common.sh@119 -- # set +e 00:08:27.950 10:08:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:27.950 10:08:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:27.950 rmmod nvme_tcp 00:08:27.950 rmmod nvme_fabrics 00:08:27.950 rmmod nvme_keyring 00:08:27.950 10:08:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:27.950 10:08:45 -- nvmf/common.sh@123 -- # set -e 00:08:27.950 10:08:45 -- nvmf/common.sh@124 -- # return 0 00:08:27.950 10:08:45 -- nvmf/common.sh@477 -- # '[' -n 71788 ']' 00:08:27.950 10:08:45 -- nvmf/common.sh@478 -- # killprocess 71788 00:08:27.950 10:08:45 -- common/autotest_common.sh@936 -- # '[' -z 71788 ']' 00:08:27.950 10:08:45 -- common/autotest_common.sh@940 -- # kill -0 71788 00:08:27.950 10:08:45 -- common/autotest_common.sh@941 -- # uname 00:08:27.950 10:08:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:27.950 10:08:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71788 00:08:27.950 10:08:45 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:27.950 10:08:45 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:27.950 killing process with pid 71788 00:08:27.950 10:08:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71788' 00:08:27.950 10:08:45 -- common/autotest_common.sh@955 -- # kill 71788 00:08:27.950 10:08:45 -- common/autotest_common.sh@960 -- # wait 71788 00:08:27.950 nvmf threads initialize successfully 00:08:27.950 bdev subsystem init successfully 00:08:27.950 created a nvmf target service 00:08:27.950 create targets's poll groups done 00:08:27.950 all subsystems of target started 00:08:27.950 nvmf target is running 00:08:27.950 all subsystems of target stopped 00:08:27.950 destroy targets's poll groups done 00:08:27.950 destroyed the nvmf target service 00:08:27.950 bdev subsystem finish successfully 00:08:27.950 nvmf threads destroy successfully 00:08:27.950 10:08:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:27.950 10:08:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:27.950 10:08:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:27.950 10:08:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.950 10:08:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:27.950 10:08:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.950 10:08:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.950 10:08:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.950 10:08:45 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:27.950 10:08:45 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:27.950 10:08:45 -- common/autotest_common.sh@728 -- # 
xtrace_disable 00:08:27.950 10:08:45 -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 00:08:27.950 real 0m11.631s 00:08:27.950 user 0m41.312s 00:08:27.950 sys 0m1.916s 00:08:27.950 10:08:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:27.950 10:08:45 -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 ************************************ 00:08:27.950 END TEST nvmf_example 00:08:27.950 ************************************ 00:08:27.950 10:08:45 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:27.950 10:08:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.950 10:08:45 -- common/autotest_common.sh@10 -- # set +x 00:08:27.950 ************************************ 00:08:27.950 START TEST nvmf_filesystem 00:08:27.950 ************************************ 00:08:27.950 10:08:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:27.950 * Looking for test storage... 00:08:27.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.950 10:08:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.950 10:08:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.950 10:08:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.950 10:08:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.950 10:08:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.950 10:08:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.950 10:08:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.950 10:08:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.950 10:08:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.950 10:08:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.950 10:08:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.950 10:08:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.950 10:08:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.950 10:08:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.950 10:08:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.950 10:08:45 -- scripts/common.sh@344 -- # : 1 00:08:27.950 10:08:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.950 10:08:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.950 10:08:45 -- scripts/common.sh@364 -- # decimal 1 00:08:27.950 10:08:45 -- scripts/common.sh@352 -- # local d=1 00:08:27.950 10:08:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.950 10:08:45 -- scripts/common.sh@354 -- # echo 1 00:08:27.950 10:08:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.950 10:08:45 -- scripts/common.sh@365 -- # decimal 2 00:08:27.950 10:08:45 -- scripts/common.sh@352 -- # local d=2 00:08:27.950 10:08:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.950 10:08:45 -- scripts/common.sh@354 -- # echo 2 00:08:27.950 10:08:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.950 10:08:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.950 10:08:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.950 10:08:45 -- scripts/common.sh@367 -- # return 0 00:08:27.950 10:08:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.950 --rc genhtml_branch_coverage=1 00:08:27.950 --rc genhtml_function_coverage=1 00:08:27.950 --rc genhtml_legend=1 00:08:27.950 --rc geninfo_all_blocks=1 00:08:27.950 --rc geninfo_unexecuted_blocks=1 00:08:27.950 00:08:27.950 ' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.950 --rc genhtml_branch_coverage=1 00:08:27.950 --rc genhtml_function_coverage=1 00:08:27.950 --rc genhtml_legend=1 00:08:27.950 --rc geninfo_all_blocks=1 00:08:27.950 --rc geninfo_unexecuted_blocks=1 00:08:27.950 00:08:27.950 ' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.950 --rc genhtml_branch_coverage=1 00:08:27.950 --rc genhtml_function_coverage=1 00:08:27.950 --rc genhtml_legend=1 00:08:27.950 --rc geninfo_all_blocks=1 00:08:27.950 --rc geninfo_unexecuted_blocks=1 00:08:27.950 00:08:27.950 ' 00:08:27.950 10:08:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.950 --rc genhtml_branch_coverage=1 00:08:27.950 --rc genhtml_function_coverage=1 00:08:27.950 --rc genhtml_legend=1 00:08:27.950 --rc geninfo_all_blocks=1 00:08:27.950 --rc geninfo_unexecuted_blocks=1 00:08:27.950 00:08:27.950 ' 00:08:27.950 10:08:45 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:27.950 10:08:45 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:27.950 10:08:45 -- common/autotest_common.sh@34 -- # set -e 00:08:27.951 10:08:45 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:27.951 10:08:45 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:27.951 10:08:45 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:27.951 10:08:45 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:27.951 10:08:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:27.951 10:08:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:27.951 10:08:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:27.951 10:08:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:27.951 10:08:45 -- common/build_config.sh@5 -- # 
CONFIG_USDT=y 00:08:27.951 10:08:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:27.951 10:08:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:27.951 10:08:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:27.951 10:08:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:27.951 10:08:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:27.951 10:08:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:27.951 10:08:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:27.951 10:08:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:27.951 10:08:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:27.951 10:08:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:27.951 10:08:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:27.951 10:08:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:27.951 10:08:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:27.951 10:08:45 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:27.951 10:08:45 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:27.951 10:08:45 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:27.951 10:08:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:27.951 10:08:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:27.951 10:08:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:27.951 10:08:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:27.951 10:08:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:27.951 10:08:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:27.951 10:08:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:27.951 10:08:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:27.951 10:08:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:27.951 10:08:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:27.951 10:08:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:27.951 10:08:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:27.951 10:08:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:27.951 10:08:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:27.951 10:08:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:27.951 10:08:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:27.951 10:08:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:27.951 10:08:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:27.951 10:08:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:27.951 10:08:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:27.951 10:08:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:27.951 10:08:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:27.951 10:08:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:27.951 10:08:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:27.951 10:08:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:27.951 10:08:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:27.951 10:08:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:27.951 10:08:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:27.951 10:08:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:27.951 10:08:45 -- common/build_config.sh@51 
-- # CONFIG_VFIO_USER=n 00:08:27.951 10:08:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:27.951 10:08:45 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:27.951 10:08:45 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:27.951 10:08:45 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:27.951 10:08:45 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:27.951 10:08:45 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:27.951 10:08:45 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:27.951 10:08:45 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:27.951 10:08:45 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:27.951 10:08:45 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.951 10:08:45 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:27.951 10:08:45 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:27.951 10:08:45 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:27.951 10:08:45 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:27.951 10:08:45 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:27.951 10:08:45 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:27.951 10:08:45 -- common/build_config.sh@68 -- # CONFIG_AVAHI=y 00:08:27.951 10:08:45 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:27.951 10:08:45 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:27.951 10:08:45 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:27.951 10:08:45 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:27.951 10:08:45 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:27.951 10:08:45 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:27.951 10:08:45 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:27.951 10:08:45 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:27.951 10:08:45 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:27.951 10:08:45 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:27.951 10:08:45 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:27.951 10:08:45 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:27.951 10:08:45 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:27.951 10:08:45 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:27.951 10:08:45 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:27.951 10:08:45 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:27.951 10:08:45 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.951 10:08:45 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:27.951 10:08:45 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.951 10:08:45 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:27.951 10:08:45 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:27.951 10:08:45 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:27.951 10:08:45 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:27.951 10:08:45 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:27.951 10:08:45 -- common/applications.sh@19 -- # 
SPDK_APP=("$_app_dir/spdk_tgt") 00:08:27.951 10:08:45 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:27.951 10:08:45 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:27.951 #define SPDK_CONFIG_H 00:08:27.951 #define SPDK_CONFIG_APPS 1 00:08:27.951 #define SPDK_CONFIG_ARCH native 00:08:27.951 #undef SPDK_CONFIG_ASAN 00:08:27.951 #define SPDK_CONFIG_AVAHI 1 00:08:27.951 #undef SPDK_CONFIG_CET 00:08:27.951 #define SPDK_CONFIG_COVERAGE 1 00:08:27.951 #define SPDK_CONFIG_CROSS_PREFIX 00:08:27.951 #undef SPDK_CONFIG_CRYPTO 00:08:27.951 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:27.951 #undef SPDK_CONFIG_CUSTOMOCF 00:08:27.951 #undef SPDK_CONFIG_DAOS 00:08:27.951 #define SPDK_CONFIG_DAOS_DIR 00:08:27.951 #define SPDK_CONFIG_DEBUG 1 00:08:27.951 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:27.951 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:27.951 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:27.951 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.951 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:27.951 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:27.951 #define SPDK_CONFIG_EXAMPLES 1 00:08:27.951 #undef SPDK_CONFIG_FC 00:08:27.951 #define SPDK_CONFIG_FC_PATH 00:08:27.951 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:27.951 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:27.951 #undef SPDK_CONFIG_FUSE 00:08:27.951 #undef SPDK_CONFIG_FUZZER 00:08:27.951 #define SPDK_CONFIG_FUZZER_LIB 00:08:27.952 #define SPDK_CONFIG_GOLANG 1 00:08:27.952 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:27.952 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:27.952 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:27.952 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:27.952 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:27.952 #define SPDK_CONFIG_IDXD 1 00:08:27.952 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:27.952 #undef SPDK_CONFIG_IPSEC_MB 00:08:27.952 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:27.952 #define SPDK_CONFIG_ISAL 1 00:08:27.952 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:27.952 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:27.952 #define SPDK_CONFIG_LIBDIR 00:08:27.952 #undef SPDK_CONFIG_LTO 00:08:27.952 #define SPDK_CONFIG_MAX_LCORES 00:08:27.952 #define SPDK_CONFIG_NVME_CUSE 1 00:08:27.952 #undef SPDK_CONFIG_OCF 00:08:27.952 #define SPDK_CONFIG_OCF_PATH 00:08:27.952 #define SPDK_CONFIG_OPENSSL_PATH 00:08:27.952 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:27.952 #undef SPDK_CONFIG_PGO_USE 00:08:27.952 #define SPDK_CONFIG_PREFIX /usr/local 00:08:27.952 #undef SPDK_CONFIG_RAID5F 00:08:27.952 #undef SPDK_CONFIG_RBD 00:08:27.952 #define SPDK_CONFIG_RDMA 1 00:08:27.952 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:27.952 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:27.952 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:27.952 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:27.952 #define SPDK_CONFIG_SHARED 1 00:08:27.952 #undef SPDK_CONFIG_SMA 00:08:27.952 #define SPDK_CONFIG_TESTS 1 00:08:27.952 #undef SPDK_CONFIG_TSAN 00:08:27.952 #define SPDK_CONFIG_UBLK 1 00:08:27.952 #define SPDK_CONFIG_UBSAN 1 00:08:27.952 #undef SPDK_CONFIG_UNIT_TESTS 00:08:27.952 #undef SPDK_CONFIG_URING 00:08:27.952 #define SPDK_CONFIG_URING_PATH 00:08:27.952 #undef SPDK_CONFIG_URING_ZNS 00:08:27.952 #define SPDK_CONFIG_USDT 1 00:08:27.952 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:27.952 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:27.952 #undef SPDK_CONFIG_VFIO_USER 00:08:27.952 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:08:27.952 #define SPDK_CONFIG_VHOST 1 00:08:27.952 #define SPDK_CONFIG_VIRTIO 1 00:08:27.952 #undef SPDK_CONFIG_VTUNE 00:08:27.952 #define SPDK_CONFIG_VTUNE_DIR 00:08:27.952 #define SPDK_CONFIG_WERROR 1 00:08:27.952 #define SPDK_CONFIG_WPDK_DIR 00:08:27.952 #undef SPDK_CONFIG_XNVME 00:08:27.952 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:27.952 10:08:45 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:27.952 10:08:45 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.952 10:08:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.952 10:08:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.952 10:08:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.952 10:08:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.952 10:08:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.952 10:08:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.952 10:08:45 -- paths/export.sh@5 -- # export PATH 00:08:27.952 10:08:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.952 10:08:45 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:27.952 10:08:45 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:27.952 10:08:45 -- pm/common@6 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:27.952 10:08:45 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:27.952 10:08:45 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:27.952 10:08:45 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:27.952 10:08:45 -- pm/common@16 -- # TEST_TAG=N/A 00:08:27.952 10:08:45 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:27.952 10:08:45 -- common/autotest_common.sh@52 -- # : 1 00:08:27.952 10:08:45 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:27.952 10:08:45 -- common/autotest_common.sh@56 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:27.952 10:08:45 -- common/autotest_common.sh@58 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:27.952 10:08:45 -- common/autotest_common.sh@60 -- # : 1 00:08:27.952 10:08:45 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:27.952 10:08:45 -- common/autotest_common.sh@62 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:27.952 10:08:45 -- common/autotest_common.sh@64 -- # : 00:08:27.952 10:08:45 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:27.952 10:08:45 -- common/autotest_common.sh@66 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:27.952 10:08:45 -- common/autotest_common.sh@68 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:27.952 10:08:45 -- common/autotest_common.sh@70 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:27.952 10:08:45 -- common/autotest_common.sh@72 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:27.952 10:08:45 -- common/autotest_common.sh@74 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:27.952 10:08:45 -- common/autotest_common.sh@76 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:27.952 10:08:45 -- common/autotest_common.sh@78 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:27.952 10:08:45 -- common/autotest_common.sh@80 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:27.952 10:08:45 -- common/autotest_common.sh@82 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:27.952 10:08:45 -- common/autotest_common.sh@84 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:27.952 10:08:45 -- common/autotest_common.sh@86 -- # : 1 00:08:27.952 10:08:45 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:27.952 10:08:45 -- common/autotest_common.sh@88 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:27.952 10:08:45 -- common/autotest_common.sh@90 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:27.952 10:08:45 -- common/autotest_common.sh@92 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:27.952 10:08:45 -- common/autotest_common.sh@94 -- # : 0 00:08:27.952 10:08:45 -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:27.952 10:08:45 -- common/autotest_common.sh@96 -- # : tcp 00:08:27.952 10:08:45 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:27.952 10:08:45 -- common/autotest_common.sh@98 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:27.952 10:08:45 -- common/autotest_common.sh@100 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:27.952 10:08:45 -- common/autotest_common.sh@102 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:27.952 10:08:45 -- common/autotest_common.sh@104 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:27.952 10:08:45 -- common/autotest_common.sh@106 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:27.952 10:08:45 -- common/autotest_common.sh@108 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:27.952 10:08:45 -- common/autotest_common.sh@110 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:27.952 10:08:45 -- common/autotest_common.sh@112 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:27.952 10:08:45 -- common/autotest_common.sh@114 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:27.952 10:08:45 -- common/autotest_common.sh@116 -- # : 1 00:08:27.952 10:08:45 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:27.952 10:08:45 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:27.952 10:08:45 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:27.952 10:08:45 -- common/autotest_common.sh@120 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:27.952 10:08:45 -- common/autotest_common.sh@122 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:27.952 10:08:45 -- common/autotest_common.sh@124 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:27.952 10:08:45 -- common/autotest_common.sh@126 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:27.952 10:08:45 -- common/autotest_common.sh@128 -- # : 0 00:08:27.952 10:08:45 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:27.953 10:08:45 -- common/autotest_common.sh@130 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:27.953 10:08:45 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:27.953 10:08:45 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:27.953 10:08:45 -- common/autotest_common.sh@134 -- # : true 00:08:27.953 10:08:45 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:27.953 10:08:45 -- common/autotest_common.sh@136 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:27.953 10:08:45 -- common/autotest_common.sh@138 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:27.953 10:08:45 -- common/autotest_common.sh@140 -- # : 1 00:08:27.953 10:08:45 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:27.953 10:08:45 -- 
common/autotest_common.sh@142 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:27.953 10:08:45 -- common/autotest_common.sh@144 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:27.953 10:08:45 -- common/autotest_common.sh@146 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:27.953 10:08:45 -- common/autotest_common.sh@148 -- # : 00:08:27.953 10:08:45 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:27.953 10:08:45 -- common/autotest_common.sh@150 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:27.953 10:08:45 -- common/autotest_common.sh@152 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:27.953 10:08:45 -- common/autotest_common.sh@154 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:27.953 10:08:45 -- common/autotest_common.sh@156 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:27.953 10:08:45 -- common/autotest_common.sh@158 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:27.953 10:08:45 -- common/autotest_common.sh@160 -- # : 0 00:08:27.953 10:08:45 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:27.953 10:08:45 -- common/autotest_common.sh@163 -- # : 00:08:27.953 10:08:45 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:27.953 10:08:45 -- common/autotest_common.sh@165 -- # : 1 00:08:27.953 10:08:45 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:27.953 10:08:45 -- common/autotest_common.sh@167 -- # : 1 00:08:27.953 10:08:45 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:27.953 10:08:45 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:27.953 10:08:45 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.953 10:08:45 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:27.953 10:08:45 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:27.953 10:08:45 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:27.953 10:08:45 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:27.953 10:08:45 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:27.953 10:08:45 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.953 10:08:45 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:27.953 10:08:45 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.953 10:08:45 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:27.953 10:08:45 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:27.953 10:08:45 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:27.953 10:08:45 -- common/autotest_common.sh@196 -- # cat 00:08:27.953 10:08:45 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:27.953 10:08:45 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.953 10:08:45 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:27.953 10:08:45 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.953 10:08:45 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:27.953 10:08:45 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:27.953 10:08:45 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:27.953 10:08:45 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.953 10:08:45 -- 
common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:27.953 10:08:45 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.953 10:08:45 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:27.953 10:08:45 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.953 10:08:45 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:27.953 10:08:45 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.953 10:08:45 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:27.953 10:08:45 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:27.953 10:08:45 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:27.953 10:08:45 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.953 10:08:45 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:27.953 10:08:45 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:27.953 10:08:45 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:27.953 10:08:45 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:27.953 10:08:45 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:27.954 10:08:45 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:27.954 10:08:45 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:27.954 10:08:45 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:27.954 10:08:45 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:27.954 10:08:45 -- common/autotest_common.sh@259 -- # valgrind= 00:08:27.954 10:08:45 -- common/autotest_common.sh@265 -- # uname -s 00:08:27.954 10:08:45 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:27.954 10:08:45 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:27.954 10:08:45 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:27.954 10:08:45 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:27.954 10:08:45 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:27.954 10:08:45 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:08:27.954 10:08:45 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:27.954 10:08:45 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:27.954 10:08:45 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:27.954 10:08:45 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:27.954 10:08:45 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:27.954 10:08:45 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:27.954 10:08:45 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:27.954 10:08:45 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=tcp 00:08:27.954 10:08:45 -- common/autotest_common.sh@319 -- # [[ 
-z 72015 ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@319 -- # kill -0 72015 00:08:27.954 10:08:45 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:27.954 10:08:45 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:27.954 10:08:45 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:27.954 10:08:45 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:27.954 10:08:45 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:27.954 10:08:45 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:27.954 10:08:45 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:27.954 10:08:45 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.eY98jw 00:08:27.954 10:08:45 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:27.954 10:08:45 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:27.954 10:08:45 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.eY98jw/tests/target /tmp/spdk.eY98jw 00:08:27.954 10:08:45 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@328 -- # df -T 00:08:27.954 10:08:45 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431668736 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150209536 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=devtmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=4194304 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=4194304 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=6265163776 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=2493755392 00:08:27.954 10:08:45 -- 
common/autotest_common.sh@363 -- # sizes["$mount"]=2506571776 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=12816384 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda5 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=btrfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=13431668736 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=20314062848 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=6150209536 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=6266281984 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6266421248 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=139264 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda2 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=840085504 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1012768768 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=103477248 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda3 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=91617280 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=104607744 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=12990464 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=1253269504 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1253281792 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:08:27.954 10:08:45 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # avails["$mount"]=95003074560 00:08:27.954 10:08:45 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:08:27.954 10:08:45 -- common/autotest_common.sh@364 -- # uses["$mount"]=4699705344 00:08:27.954 10:08:45 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:27.954 10:08:45 -- common/autotest_common.sh@367 -- # printf '* Looking 
for test storage...\n' 00:08:27.954 * Looking for test storage... 00:08:27.954 10:08:45 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:27.954 10:08:45 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:27.954 10:08:45 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:27.954 10:08:45 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.954 10:08:46 -- common/autotest_common.sh@373 -- # mount=/home 00:08:27.954 10:08:46 -- common/autotest_common.sh@375 -- # target_space=13431668736 00:08:27.954 10:08:46 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:27.954 10:08:46 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:27.954 10:08:46 -- common/autotest_common.sh@381 -- # [[ btrfs == tmpfs ]] 00:08:27.954 10:08:46 -- common/autotest_common.sh@381 -- # [[ btrfs == ramfs ]] 00:08:27.954 10:08:46 -- common/autotest_common.sh@381 -- # [[ /home == / ]] 00:08:27.954 10:08:46 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.954 10:08:46 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.954 10:08:46 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:27.954 10:08:46 -- common/autotest_common.sh@390 -- # return 0 00:08:27.954 10:08:46 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:27.954 10:08:46 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:27.954 10:08:46 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:27.954 10:08:46 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:27.954 10:08:46 -- common/autotest_common.sh@1682 -- # true 00:08:27.954 10:08:46 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:27.954 10:08:46 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:27.954 10:08:46 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:27.954 10:08:46 -- common/autotest_common.sh@27 -- # exec 00:08:27.954 10:08:46 -- common/autotest_common.sh@29 -- # exec 00:08:27.954 10:08:46 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:27.954 10:08:46 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:27.954 10:08:46 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:27.954 10:08:46 -- common/autotest_common.sh@18 -- # set -x 00:08:27.955 10:08:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:27.955 10:08:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:27.955 10:08:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:27.955 10:08:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:27.955 10:08:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:27.955 10:08:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:27.955 10:08:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:27.955 10:08:46 -- scripts/common.sh@335 -- # IFS=.-: 00:08:27.955 10:08:46 -- scripts/common.sh@335 -- # read -ra ver1 00:08:27.955 10:08:46 -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.955 10:08:46 -- scripts/common.sh@336 -- # read -ra ver2 00:08:27.955 10:08:46 -- scripts/common.sh@337 -- # local 'op=<' 00:08:27.955 10:08:46 -- scripts/common.sh@339 -- # ver1_l=2 00:08:27.955 10:08:46 -- scripts/common.sh@340 -- # ver2_l=1 00:08:27.955 10:08:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:27.955 10:08:46 -- scripts/common.sh@343 -- # case "$op" in 00:08:27.955 10:08:46 -- scripts/common.sh@344 -- # : 1 00:08:27.955 10:08:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:27.955 10:08:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.955 10:08:46 -- scripts/common.sh@364 -- # decimal 1 00:08:27.955 10:08:46 -- scripts/common.sh@352 -- # local d=1 00:08:27.955 10:08:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.955 10:08:46 -- scripts/common.sh@354 -- # echo 1 00:08:27.955 10:08:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:27.955 10:08:46 -- scripts/common.sh@365 -- # decimal 2 00:08:27.955 10:08:46 -- scripts/common.sh@352 -- # local d=2 00:08:27.955 10:08:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.955 10:08:46 -- scripts/common.sh@354 -- # echo 2 00:08:27.955 10:08:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:27.955 10:08:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:27.955 10:08:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:27.955 10:08:46 -- scripts/common.sh@367 -- # return 0 00:08:27.955 10:08:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.955 10:08:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.955 --rc genhtml_branch_coverage=1 00:08:27.955 --rc genhtml_function_coverage=1 00:08:27.955 --rc genhtml_legend=1 00:08:27.955 --rc geninfo_all_blocks=1 00:08:27.955 --rc geninfo_unexecuted_blocks=1 00:08:27.955 00:08:27.955 ' 00:08:27.955 10:08:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.955 --rc genhtml_branch_coverage=1 00:08:27.955 --rc genhtml_function_coverage=1 00:08:27.955 --rc genhtml_legend=1 00:08:27.955 --rc geninfo_all_blocks=1 00:08:27.955 --rc geninfo_unexecuted_blocks=1 00:08:27.955 00:08:27.955 ' 00:08:27.955 10:08:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.955 --rc genhtml_branch_coverage=1 00:08:27.955 --rc genhtml_function_coverage=1 00:08:27.955 --rc genhtml_legend=1 00:08:27.955 --rc geninfo_all_blocks=1 00:08:27.955 --rc 
geninfo_unexecuted_blocks=1 00:08:27.955 00:08:27.955 ' 00:08:27.955 10:08:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:27.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.955 --rc genhtml_branch_coverage=1 00:08:27.955 --rc genhtml_function_coverage=1 00:08:27.955 --rc genhtml_legend=1 00:08:27.955 --rc geninfo_all_blocks=1 00:08:27.955 --rc geninfo_unexecuted_blocks=1 00:08:27.955 00:08:27.955 ' 00:08:27.955 10:08:46 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:27.955 10:08:46 -- nvmf/common.sh@7 -- # uname -s 00:08:27.955 10:08:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.955 10:08:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.955 10:08:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.955 10:08:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.955 10:08:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.955 10:08:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.955 10:08:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.955 10:08:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.955 10:08:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.955 10:08:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.955 10:08:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:08:27.955 10:08:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:08:27.955 10:08:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.955 10:08:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.955 10:08:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:27.955 10:08:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.955 10:08:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.955 10:08:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.955 10:08:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.955 10:08:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.955 10:08:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.955 10:08:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.955 10:08:46 -- paths/export.sh@5 -- # export PATH 00:08:27.955 10:08:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.955 10:08:46 -- nvmf/common.sh@46 -- # : 0 00:08:27.955 10:08:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:27.955 10:08:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:27.955 10:08:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:27.955 10:08:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.955 10:08:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.955 10:08:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:27.955 10:08:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:27.956 10:08:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:27.956 10:08:46 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:27.956 10:08:46 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:27.956 10:08:46 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:27.956 10:08:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:27.956 10:08:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.956 10:08:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:27.956 10:08:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:27.956 10:08:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:27.956 10:08:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.956 10:08:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.956 10:08:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.956 10:08:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:27.956 10:08:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.956 10:08:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.956 10:08:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:27.956 10:08:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:27.956 10:08:46 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:27.956 10:08:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:27.956 10:08:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:27.956 10:08:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.956 10:08:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:27.956 10:08:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:27.956 10:08:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:27.956 10:08:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:27.956 10:08:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:27.956 10:08:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:27.956 Cannot find device "nvmf_tgt_br" 00:08:27.956 10:08:46 -- nvmf/common.sh@154 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:27.956 Cannot find device "nvmf_tgt_br2" 00:08:27.956 10:08:46 -- nvmf/common.sh@155 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:27.956 10:08:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:27.956 Cannot find device "nvmf_tgt_br" 00:08:27.956 10:08:46 -- nvmf/common.sh@157 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:27.956 Cannot find device "nvmf_tgt_br2" 00:08:27.956 10:08:46 -- nvmf/common.sh@158 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:27.956 10:08:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:27.956 10:08:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:27.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.956 10:08:46 -- nvmf/common.sh@161 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:27.956 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:27.956 10:08:46 -- nvmf/common.sh@162 -- # true 00:08:27.956 10:08:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:27.956 10:08:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:27.956 10:08:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:27.956 10:08:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:27.956 10:08:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:27.956 10:08:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:27.956 10:08:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:27.956 10:08:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:27.956 10:08:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:27.956 10:08:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:27.956 10:08:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:27.956 10:08:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:27.956 10:08:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:27.956 10:08:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:27.956 10:08:46 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:27.956 10:08:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:27.956 10:08:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:27.956 10:08:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:27.956 10:08:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:27.956 10:08:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:27.956 10:08:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:27.956 10:08:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:27.956 10:08:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:27.956 10:08:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:27.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:08:27.956 00:08:27.956 --- 10.0.0.2 ping statistics --- 00:08:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.956 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:08:27.956 10:08:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:27.956 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:27.956 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:08:27.956 00:08:27.956 --- 10.0.0.3 ping statistics --- 00:08:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.956 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:27.956 10:08:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:27.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:27.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:08:27.956 00:08:27.956 --- 10.0.0.1 ping statistics --- 00:08:27.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.956 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:27.956 10:08:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.956 10:08:46 -- nvmf/common.sh@421 -- # return 0 00:08:27.956 10:08:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:27.956 10:08:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.956 10:08:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:27.956 10:08:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.956 10:08:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:27.956 10:08:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:27.956 10:08:46 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:27.956 10:08:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:27.956 10:08:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.956 10:08:46 -- common/autotest_common.sh@10 -- # set +x 00:08:27.956 ************************************ 00:08:27.956 START TEST nvmf_filesystem_no_in_capsule 00:08:27.956 ************************************ 00:08:27.956 10:08:46 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:27.956 10:08:46 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:27.956 10:08:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:27.956 10:08:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:27.956 10:08:46 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:08:27.956 10:08:46 -- common/autotest_common.sh@10 -- # set +x 00:08:27.956 10:08:46 -- nvmf/common.sh@469 -- # nvmfpid=72191 00:08:27.956 10:08:46 -- nvmf/common.sh@470 -- # waitforlisten 72191 00:08:27.956 10:08:46 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:27.956 10:08:46 -- common/autotest_common.sh@829 -- # '[' -z 72191 ']' 00:08:27.956 10:08:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.956 10:08:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:27.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.956 10:08:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.956 10:08:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:27.956 10:08:46 -- common/autotest_common.sh@10 -- # set +x 00:08:27.957 [2024-11-19 10:08:46.534993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:27.957 [2024-11-19 10:08:46.535088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.957 [2024-11-19 10:08:46.690777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:27.957 [2024-11-19 10:08:46.738513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.957 [2024-11-19 10:08:46.738810] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.957 [2024-11-19 10:08:46.738849] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.957 [2024-11-19 10:08:46.738863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
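The nvmf/common.sh calls traced above build a small veth/bridge test network, move the target-side interfaces into the nvmf_tgt_ns_spdk namespace, open TCP port 4420, verify reachability with ping, and finally launch nvmf_tgt inside that namespace. A condensed sketch of the same sequence, with every interface name, address and flag taken from the log and only the grouping added for readability:

# Sketch reconstructed from the trace above (nvmf/common.sh@165-208 and @468);
# interface names, addresses and the 4420 port are the ones shown in the log.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                          # host -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target -> host
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Keeping all three veth peers on the single nvmf_br segment is what lets one FORWARD rule cover every path between the initiator side and the target namespace.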
00:08:27.957 [2024-11-19 10:08:46.738967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.957 [2024-11-19 10:08:46.739013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:27.957 [2024-11-19 10:08:46.739111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:27.957 [2024-11-19 10:08:46.739124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.214 10:08:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:28.214 10:08:47 -- common/autotest_common.sh@862 -- # return 0 00:08:28.214 10:08:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:28.214 10:08:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.214 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.214 10:08:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.214 10:08:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:28.214 10:08:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:28.214 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.214 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 [2024-11-19 10:08:47.764547] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.472 10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.472 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:28.472 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.472 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.472 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.472 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.472 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 [2024-11-19 10:08:47.887196] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.472 10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:28.472 10:08:47 -- common/autotest_common.sh@1369 -- # local bs 00:08:28.472 10:08:47 -- common/autotest_common.sh@1370 -- # local nb 00:08:28.472 10:08:47 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:28.472 10:08:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.472 10:08:47 -- common/autotest_common.sh@10 -- # set +x 00:08:28.472 
10:08:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.472 10:08:47 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:28.472 { 00:08:28.472 "aliases": [ 00:08:28.472 "77b77468-151e-4655-9db0-8b99b81a2654" 00:08:28.472 ], 00:08:28.472 "assigned_rate_limits": { 00:08:28.472 "r_mbytes_per_sec": 0, 00:08:28.472 "rw_ios_per_sec": 0, 00:08:28.472 "rw_mbytes_per_sec": 0, 00:08:28.472 "w_mbytes_per_sec": 0 00:08:28.472 }, 00:08:28.472 "block_size": 512, 00:08:28.472 "claim_type": "exclusive_write", 00:08:28.472 "claimed": true, 00:08:28.472 "driver_specific": {}, 00:08:28.472 "memory_domains": [ 00:08:28.472 { 00:08:28.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.472 "dma_device_type": 2 00:08:28.472 } 00:08:28.472 ], 00:08:28.472 "name": "Malloc1", 00:08:28.472 "num_blocks": 1048576, 00:08:28.472 "product_name": "Malloc disk", 00:08:28.472 "supported_io_types": { 00:08:28.472 "abort": true, 00:08:28.472 "compare": false, 00:08:28.472 "compare_and_write": false, 00:08:28.472 "flush": true, 00:08:28.472 "nvme_admin": false, 00:08:28.472 "nvme_io": false, 00:08:28.472 "read": true, 00:08:28.472 "reset": true, 00:08:28.472 "unmap": true, 00:08:28.472 "write": true, 00:08:28.472 "write_zeroes": true 00:08:28.472 }, 00:08:28.472 "uuid": "77b77468-151e-4655-9db0-8b99b81a2654", 00:08:28.472 "zoned": false 00:08:28.472 } 00:08:28.472 ]' 00:08:28.472 10:08:47 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:28.472 10:08:47 -- common/autotest_common.sh@1372 -- # bs=512 00:08:28.472 10:08:47 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:28.728 10:08:48 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:28.728 10:08:48 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:28.728 10:08:48 -- common/autotest_common.sh@1377 -- # echo 512 00:08:28.728 10:08:48 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:28.729 10:08:48 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.729 10:08:48 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.729 10:08:48 -- common/autotest_common.sh@1187 -- # local i=0 00:08:28.729 10:08:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.729 10:08:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:28.729 10:08:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:31.256 10:08:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:31.256 10:08:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:31.256 10:08:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.256 10:08:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:31.256 10:08:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.256 10:08:50 -- common/autotest_common.sh@1197 -- # return 0 00:08:31.256 10:08:50 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:31.256 10:08:50 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:31.256 10:08:50 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:31.256 10:08:50 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:31.256 10:08:50 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:31.256 10:08:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:31.256 10:08:50 -- 
setup/common.sh@80 -- # echo 536870912 00:08:31.256 10:08:50 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:31.256 10:08:50 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:31.256 10:08:50 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:31.256 10:08:50 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:31.256 10:08:50 -- target/filesystem.sh@69 -- # partprobe 00:08:31.256 10:08:50 -- target/filesystem.sh@70 -- # sleep 1 00:08:32.191 10:08:51 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:32.191 10:08:51 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.191 10:08:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:32.191 10:08:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.191 10:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.191 ************************************ 00:08:32.191 START TEST filesystem_ext4 00:08:32.191 ************************************ 00:08:32.191 10:08:51 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.191 10:08:51 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.191 10:08:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.191 10:08:51 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.191 10:08:51 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:32.191 10:08:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:32.191 10:08:51 -- common/autotest_common.sh@914 -- # local i=0 00:08:32.191 10:08:51 -- common/autotest_common.sh@915 -- # local force 00:08:32.191 10:08:51 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:32.191 10:08:51 -- common/autotest_common.sh@918 -- # force=-F 00:08:32.191 10:08:51 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.191 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.191 Discarding device blocks: 0/522240 done 00:08:32.191 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.191 Filesystem UUID: f15ee584-2dfa-41ee-88e7-8d2afd8af850 00:08:32.191 Superblock backups stored on blocks: 00:08:32.191 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.191 00:08:32.191 Allocating group tables: 0/64 done 00:08:32.191 Writing inode tables: 0/64 done 00:08:32.191 Creating journal (8192 blocks): done 00:08:32.191 Writing superblocks and filesystem accounting information: 0/64 done 00:08:32.191 00:08:32.191 10:08:51 -- common/autotest_common.sh@931 -- # return 0 00:08:32.191 10:08:51 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.477 10:08:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.477 10:08:56 -- target/filesystem.sh@25 -- # sync 00:08:37.477 10:08:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.477 10:08:56 -- target/filesystem.sh@27 -- # sync 00:08:37.477 10:08:56 -- target/filesystem.sh@29 -- # i=0 00:08:37.477 10:08:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.477 10:08:56 -- target/filesystem.sh@37 -- # kill -0 72191 00:08:37.477 10:08:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.477 10:08:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.477 10:08:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.477 10:08:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.477 ************************************ 00:08:37.477 END TEST filesystem_ext4 00:08:37.477 
************************************ 00:08:37.477 00:08:37.477 real 0m5.511s 00:08:37.477 user 0m0.018s 00:08:37.477 sys 0m0.063s 00:08:37.478 10:08:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.478 10:08:56 -- common/autotest_common.sh@10 -- # set +x 00:08:37.478 10:08:56 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:37.478 10:08:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:37.478 10:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.478 10:08:56 -- common/autotest_common.sh@10 -- # set +x 00:08:37.478 ************************************ 00:08:37.478 START TEST filesystem_btrfs 00:08:37.478 ************************************ 00:08:37.478 10:08:56 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:37.478 10:08:56 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:37.478 10:08:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.478 10:08:56 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:37.478 10:08:56 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:37.478 10:08:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:37.478 10:08:56 -- common/autotest_common.sh@914 -- # local i=0 00:08:37.478 10:08:56 -- common/autotest_common.sh@915 -- # local force 00:08:37.478 10:08:56 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:37.478 10:08:56 -- common/autotest_common.sh@920 -- # force=-f 00:08:37.478 10:08:56 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:37.735 btrfs-progs v6.8.1 00:08:37.735 See https://btrfs.readthedocs.io for more information. 00:08:37.735 00:08:37.735 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
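Each filesystem_* subtest in this run repeats the check that the filesystem.sh@21-43 lines above show for ext4: format the namespace's first GPT partition, mount it, do a small write/delete cycle, unmount, and confirm the target process and block devices survived. A sketch of that loop, using the pid and device names from this log; the check_fs wrapper itself is only illustrative:

# Sketch of the per-filesystem check repeated for ext4, btrfs and xfs above;
# pid 72191 and the nvme0n1/nvme0n1p1 names are the ones shown in this log.
check_fs() {
    local fstype=$1 dev=/dev/nvme0n1p1
    case "$fstype" in
        ext4) mkfs.ext4 -F "$dev" ;;
        *)    "mkfs.$fstype" -f "$dev" ;;
    esac
    mount "$dev" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 72191                               # nvmf_tgt still running?
    lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still attached?
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still present?
}
check_fs ext4; check_fs btrfs; check_fs xfs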
00:08:37.735 NOTE: several default settings have changed in version 5.15, please make sure 00:08:37.735 this does not affect your deployments: 00:08:37.735 - DUP for metadata (-m dup) 00:08:37.735 - enabled no-holes (-O no-holes) 00:08:37.735 - enabled free-space-tree (-R free-space-tree) 00:08:37.735 00:08:37.735 Label: (null) 00:08:37.735 UUID: e098478b-a262-4695-a1ab-9cb67f80001b 00:08:37.735 Node size: 16384 00:08:37.735 Sector size: 4096 (CPU page size: 4096) 00:08:37.735 Filesystem size: 510.00MiB 00:08:37.735 Block group profiles: 00:08:37.735 Data: single 8.00MiB 00:08:37.735 Metadata: DUP 32.00MiB 00:08:37.735 System: DUP 8.00MiB 00:08:37.735 SSD detected: yes 00:08:37.735 Zoned device: no 00:08:37.735 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:37.735 Checksum: crc32c 00:08:37.735 Number of devices: 1 00:08:37.735 Devices: 00:08:37.735 ID SIZE PATH 00:08:37.735 1 510.00MiB /dev/nvme0n1p1 00:08:37.735 00:08:37.735 10:08:57 -- common/autotest_common.sh@931 -- # return 0 00:08:37.735 10:08:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.735 10:08:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.735 10:08:57 -- target/filesystem.sh@25 -- # sync 00:08:37.735 10:08:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.735 10:08:57 -- target/filesystem.sh@27 -- # sync 00:08:37.735 10:08:57 -- target/filesystem.sh@29 -- # i=0 00:08:37.735 10:08:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.735 10:08:57 -- target/filesystem.sh@37 -- # kill -0 72191 00:08:37.735 10:08:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.735 10:08:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.735 10:08:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.735 10:08:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.735 ************************************ 00:08:37.735 END TEST filesystem_btrfs 00:08:37.735 ************************************ 00:08:37.735 00:08:37.735 real 0m0.177s 00:08:37.735 user 0m0.023s 00:08:37.735 sys 0m0.059s 00:08:37.735 10:08:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.735 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 10:08:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:37.735 10:08:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:37.735 10:08:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.735 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:37.735 ************************************ 00:08:37.735 START TEST filesystem_xfs 00:08:37.735 ************************************ 00:08:37.735 10:08:57 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:37.735 10:08:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:37.735 10:08:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.735 10:08:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:37.735 10:08:57 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:37.735 10:08:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:37.736 10:08:57 -- common/autotest_common.sh@914 -- # local i=0 00:08:37.736 10:08:57 -- common/autotest_common.sh@915 -- # local force 00:08:37.736 10:08:57 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:37.736 10:08:57 -- common/autotest_common.sh@920 -- # force=-f 00:08:37.736 10:08:57 -- common/autotest_common.sh@923 -- # mkfs.xfs -f 
/dev/nvme0n1p1 00:08:37.736 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:37.736 = sectsz=512 attr=2, projid32bit=1 00:08:37.736 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:37.736 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:37.736 data = bsize=4096 blocks=130560, imaxpct=25 00:08:37.736 = sunit=0 swidth=0 blks 00:08:37.736 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:37.736 log =internal log bsize=4096 blocks=16384, version=2 00:08:37.736 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:37.736 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:38.669 Discarding blocks...Done. 00:08:38.669 10:08:57 -- common/autotest_common.sh@931 -- # return 0 00:08:38.669 10:08:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.235 10:09:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.235 10:09:00 -- target/filesystem.sh@25 -- # sync 00:08:41.235 10:09:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.235 10:09:00 -- target/filesystem.sh@27 -- # sync 00:08:41.235 10:09:00 -- target/filesystem.sh@29 -- # i=0 00:08:41.235 10:09:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.235 10:09:00 -- target/filesystem.sh@37 -- # kill -0 72191 00:08:41.235 10:09:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.235 10:09:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.235 10:09:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.235 10:09:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.235 ************************************ 00:08:41.235 END TEST filesystem_xfs 00:08:41.235 ************************************ 00:08:41.235 00:08:41.235 real 0m3.147s 00:08:41.235 user 0m0.021s 00:08:41.235 sys 0m0.056s 00:08:41.235 10:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.235 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.235 10:09:00 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:41.235 10:09:00 -- target/filesystem.sh@93 -- # sync 00:08:41.235 10:09:00 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.235 10:09:00 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.235 10:09:00 -- common/autotest_common.sh@1208 -- # local i=0 00:08:41.235 10:09:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:41.235 10:09:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.235 10:09:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:41.235 10:09:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.235 10:09:00 -- common/autotest_common.sh@1220 -- # return 0 00:08:41.235 10:09:00 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.235 10:09:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.235 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.235 10:09:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.235 10:09:00 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:41.235 10:09:00 -- target/filesystem.sh@101 -- # killprocess 72191 00:08:41.235 10:09:00 -- common/autotest_common.sh@936 -- # '[' -z 72191 ']' 00:08:41.235 10:09:00 -- common/autotest_common.sh@940 -- # kill -0 72191 00:08:41.235 10:09:00 -- common/autotest_common.sh@941 -- # uname 00:08:41.235 10:09:00 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.235 10:09:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72191 00:08:41.235 killing process with pid 72191 00:08:41.235 10:09:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.235 10:09:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.235 10:09:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72191' 00:08:41.235 10:09:00 -- common/autotest_common.sh@955 -- # kill 72191 00:08:41.235 10:09:00 -- common/autotest_common.sh@960 -- # wait 72191 00:08:41.235 10:09:00 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:41.235 00:08:41.235 real 0m14.281s 00:08:41.235 user 0m55.024s 00:08:41.235 sys 0m1.899s 00:08:41.235 10:09:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.235 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.235 ************************************ 00:08:41.235 END TEST nvmf_filesystem_no_in_capsule 00:08:41.235 ************************************ 00:08:41.493 10:09:00 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:41.493 10:09:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:41.493 10:09:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:41.493 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.493 ************************************ 00:08:41.493 START TEST nvmf_filesystem_in_capsule 00:08:41.493 ************************************ 00:08:41.493 10:09:00 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:08:41.493 10:09:00 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:41.493 10:09:00 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:41.493 10:09:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:41.493 10:09:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:41.493 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.493 10:09:00 -- nvmf/common.sh@469 -- # nvmfpid=72562 00:08:41.493 10:09:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:41.493 10:09:00 -- nvmf/common.sh@470 -- # waitforlisten 72562 00:08:41.494 10:09:00 -- common/autotest_common.sh@829 -- # '[' -z 72562 ']' 00:08:41.494 10:09:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.494 10:09:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.494 10:09:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.494 10:09:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.494 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:08:41.494 [2024-11-19 10:09:00.863150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
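Before the in-capsule variant starts (pid 72562 above), the first run tears itself down as traced at filesystem.sh@91-101: remove the test partition, disconnect the host, delete the subsystem and stop the target. A sketch with the names and pid from the log, assuming rpc_cmd forwards to scripts/rpc.py as in SPDK's test helpers; apart from the in-capsule size given to nvmf_create_transport (-c 0 there, -c 4096 here) the second run repeats the same flow.

# Sketch of the teardown traced above (filesystem.sh@91-101); the pid and NQN
# are the ones from this run, and rpc_cmd is assumed to wrap scripts/rpc.py.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # detach the initiator
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 72191 && wait 72191                              # stop this run's nvmf_tgt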
00:08:41.494 [2024-11-19 10:09:00.863267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.494 [2024-11-19 10:09:01.006022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.752 [2024-11-19 10:09:01.045465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.752 [2024-11-19 10:09:01.045631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.752 [2024-11-19 10:09:01.045646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.752 [2024-11-19 10:09:01.045657] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.752 [2024-11-19 10:09:01.045987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.752 [2024-11-19 10:09:01.046053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.752 [2024-11-19 10:09:01.046112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.752 [2024-11-19 10:09:01.046115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.686 10:09:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.686 10:09:01 -- common/autotest_common.sh@862 -- # return 0 00:08:42.686 10:09:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:42.686 10:09:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:42.686 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:42.686 10:09:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.686 10:09:01 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:42.686 10:09:01 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:42.686 10:09:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.686 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:42.686 [2024-11-19 10:09:01.960615] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.686 10:09:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.686 10:09:01 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:42.686 10:09:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.686 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:42.686 Malloc1 00:08:42.686 10:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.686 10:09:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.686 10:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.686 10:09:02 -- common/autotest_common.sh@10 -- # set +x 00:08:42.686 10:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.686 10:09:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:42.686 10:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.686 10:09:02 -- common/autotest_common.sh@10 -- # set +x 00:08:42.686 10:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.686 10:09:02 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.686 10:09:02 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.687 10:09:02 -- common/autotest_common.sh@10 -- # set +x 00:08:42.687 [2024-11-19 10:09:02.076941] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.687 10:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.687 10:09:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:42.687 10:09:02 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:42.687 10:09:02 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:42.687 10:09:02 -- common/autotest_common.sh@1369 -- # local bs 00:08:42.687 10:09:02 -- common/autotest_common.sh@1370 -- # local nb 00:08:42.687 10:09:02 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:42.687 10:09:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.687 10:09:02 -- common/autotest_common.sh@10 -- # set +x 00:08:42.687 10:09:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.687 10:09:02 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:42.687 { 00:08:42.687 "aliases": [ 00:08:42.687 "e7960f41-821d-423f-9a65-d45fd5a9a1b4" 00:08:42.687 ], 00:08:42.687 "assigned_rate_limits": { 00:08:42.687 "r_mbytes_per_sec": 0, 00:08:42.687 "rw_ios_per_sec": 0, 00:08:42.687 "rw_mbytes_per_sec": 0, 00:08:42.687 "w_mbytes_per_sec": 0 00:08:42.687 }, 00:08:42.687 "block_size": 512, 00:08:42.687 "claim_type": "exclusive_write", 00:08:42.687 "claimed": true, 00:08:42.687 "driver_specific": {}, 00:08:42.687 "memory_domains": [ 00:08:42.687 { 00:08:42.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.687 "dma_device_type": 2 00:08:42.687 } 00:08:42.687 ], 00:08:42.687 "name": "Malloc1", 00:08:42.687 "num_blocks": 1048576, 00:08:42.687 "product_name": "Malloc disk", 00:08:42.687 "supported_io_types": { 00:08:42.687 "abort": true, 00:08:42.687 "compare": false, 00:08:42.687 "compare_and_write": false, 00:08:42.687 "flush": true, 00:08:42.687 "nvme_admin": false, 00:08:42.687 "nvme_io": false, 00:08:42.687 "read": true, 00:08:42.687 "reset": true, 00:08:42.687 "unmap": true, 00:08:42.687 "write": true, 00:08:42.687 "write_zeroes": true 00:08:42.687 }, 00:08:42.687 "uuid": "e7960f41-821d-423f-9a65-d45fd5a9a1b4", 00:08:42.687 "zoned": false 00:08:42.687 } 00:08:42.687 ]' 00:08:42.687 10:09:02 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:42.687 10:09:02 -- common/autotest_common.sh@1372 -- # bs=512 00:08:42.687 10:09:02 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:42.687 10:09:02 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:42.687 10:09:02 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:42.687 10:09:02 -- common/autotest_common.sh@1377 -- # echo 512 00:08:42.687 10:09:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:42.687 10:09:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.944 10:09:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.944 10:09:02 -- common/autotest_common.sh@1187 -- # local i=0 00:08:42.945 10:09:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.945 10:09:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:42.945 10:09:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:08:44.844 10:09:04 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:08:44.844 10:09:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:08:44.844 10:09:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.103 10:09:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:08:45.103 10:09:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.103 10:09:04 -- common/autotest_common.sh@1197 -- # return 0 00:08:45.103 10:09:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:45.103 10:09:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:45.103 10:09:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:45.103 10:09:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:45.103 10:09:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:45.103 10:09:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:45.103 10:09:04 -- setup/common.sh@80 -- # echo 536870912 00:08:45.103 10:09:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:45.103 10:09:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:45.103 10:09:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:45.103 10:09:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:45.103 10:09:04 -- target/filesystem.sh@69 -- # partprobe 00:08:45.103 10:09:04 -- target/filesystem.sh@70 -- # sleep 1 00:08:46.037 10:09:05 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:46.037 10:09:05 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:46.037 10:09:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:46.038 10:09:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.038 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:08:46.038 ************************************ 00:08:46.038 START TEST filesystem_in_capsule_ext4 00:08:46.038 ************************************ 00:08:46.038 10:09:05 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:46.038 10:09:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:46.038 10:09:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:46.038 10:09:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:46.038 10:09:05 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:08:46.038 10:09:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:46.038 10:09:05 -- common/autotest_common.sh@914 -- # local i=0 00:08:46.038 10:09:05 -- common/autotest_common.sh@915 -- # local force 00:08:46.038 10:09:05 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:08:46.038 10:09:05 -- common/autotest_common.sh@918 -- # force=-F 00:08:46.038 10:09:05 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:46.038 mke2fs 1.47.0 (5-Feb-2023) 00:08:46.295 Discarding device blocks: 0/522240 done 00:08:46.295 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:46.295 Filesystem UUID: 122b220a-c49b-4a6a-b121-d3041cd5ccee 00:08:46.295 Superblock backups stored on blocks: 00:08:46.295 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:46.295 00:08:46.295 Allocating group tables: 0/64 done 00:08:46.295 Writing inode tables: 0/64 done 00:08:46.295 Creating journal (8192 blocks): done 00:08:46.295 Writing superblocks and filesystem accounting information: 0/64 done 00:08:46.295 00:08:46.295 10:09:05 
-- common/autotest_common.sh@931 -- # return 0 00:08:46.295 10:09:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.564 10:09:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.564 10:09:11 -- target/filesystem.sh@25 -- # sync 00:08:51.564 10:09:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.564 10:09:11 -- target/filesystem.sh@27 -- # sync 00:08:51.564 10:09:11 -- target/filesystem.sh@29 -- # i=0 00:08:51.564 10:09:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.564 10:09:11 -- target/filesystem.sh@37 -- # kill -0 72562 00:08:51.564 10:09:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.564 10:09:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.823 10:09:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.823 10:09:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.823 ************************************ 00:08:51.823 END TEST filesystem_in_capsule_ext4 00:08:51.823 ************************************ 00:08:51.823 00:08:51.823 real 0m5.588s 00:08:51.823 user 0m0.029s 00:08:51.823 sys 0m0.063s 00:08:51.823 10:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.823 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.823 10:09:11 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:51.823 10:09:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:51.823 10:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.823 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.823 ************************************ 00:08:51.823 START TEST filesystem_in_capsule_btrfs 00:08:51.823 ************************************ 00:08:51.823 10:09:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:51.823 10:09:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:51.823 10:09:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:51.823 10:09:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:51.823 10:09:11 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:08:51.823 10:09:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:51.823 10:09:11 -- common/autotest_common.sh@914 -- # local i=0 00:08:51.823 10:09:11 -- common/autotest_common.sh@915 -- # local force 00:08:51.823 10:09:11 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:08:51.823 10:09:11 -- common/autotest_common.sh@920 -- # force=-f 00:08:51.823 10:09:11 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:51.823 btrfs-progs v6.8.1 00:08:51.823 See https://btrfs.readthedocs.io for more information. 00:08:51.823 00:08:51.823 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
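The provisioning traced at filesystem.sh@52-60 for this second run reduces to a handful of target RPCs plus one host-side connect; the long Malloc1 JSON above is bdev_get_bdevs output that the test parses with jq to derive the 536870912-byte size. A sketch with the argument values from the log, comments added only for readability:

# Sketch of the target provisioning and host attach shown in the trace above.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096     # 4096 B in-capsule data
rpc_cmd bdev_malloc_create 512 512 -b Malloc1               # 512 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%  # carve the test partition
partprobe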
00:08:51.823 NOTE: several default settings have changed in version 5.15, please make sure 00:08:51.823 this does not affect your deployments: 00:08:51.823 - DUP for metadata (-m dup) 00:08:51.823 - enabled no-holes (-O no-holes) 00:08:51.823 - enabled free-space-tree (-R free-space-tree) 00:08:51.823 00:08:51.823 Label: (null) 00:08:51.823 UUID: c5cdfb32-c5ad-4e18-beac-e2fb05869969 00:08:51.823 Node size: 16384 00:08:51.823 Sector size: 4096 (CPU page size: 4096) 00:08:51.823 Filesystem size: 510.00MiB 00:08:51.823 Block group profiles: 00:08:51.823 Data: single 8.00MiB 00:08:51.823 Metadata: DUP 32.00MiB 00:08:51.823 System: DUP 8.00MiB 00:08:51.823 SSD detected: yes 00:08:51.823 Zoned device: no 00:08:51.823 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:51.823 Checksum: crc32c 00:08:51.823 Number of devices: 1 00:08:51.823 Devices: 00:08:51.823 ID SIZE PATH 00:08:51.823 1 510.00MiB /dev/nvme0n1p1 00:08:51.823 00:08:51.823 10:09:11 -- common/autotest_common.sh@931 -- # return 0 00:08:51.823 10:09:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:51.823 10:09:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:51.823 10:09:11 -- target/filesystem.sh@25 -- # sync 00:08:51.823 10:09:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:51.823 10:09:11 -- target/filesystem.sh@27 -- # sync 00:08:51.823 10:09:11 -- target/filesystem.sh@29 -- # i=0 00:08:51.823 10:09:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:51.823 10:09:11 -- target/filesystem.sh@37 -- # kill -0 72562 00:08:51.823 10:09:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:51.823 10:09:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:51.823 10:09:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:51.823 10:09:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:51.823 00:08:51.823 real 0m0.175s 00:08:51.823 user 0m0.021s 00:08:51.823 sys 0m0.056s 00:08:51.823 10:09:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.823 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.823 ************************************ 00:08:51.823 END TEST filesystem_in_capsule_btrfs 00:08:51.823 ************************************ 00:08:52.082 10:09:11 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:52.082 10:09:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:52.082 10:09:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.082 10:09:11 -- common/autotest_common.sh@10 -- # set +x 00:08:52.082 ************************************ 00:08:52.082 START TEST filesystem_in_capsule_xfs 00:08:52.082 ************************************ 00:08:52.082 10:09:11 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:08:52.082 10:09:11 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:52.082 10:09:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:52.082 10:09:11 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:52.082 10:09:11 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:08:52.082 10:09:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:08:52.082 10:09:11 -- common/autotest_common.sh@914 -- # local i=0 00:08:52.082 10:09:11 -- common/autotest_common.sh@915 -- # local force 00:08:52.082 10:09:11 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:08:52.082 10:09:11 -- common/autotest_common.sh@920 -- # force=-f 00:08:52.082 10:09:11 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:52.082 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:52.082 = sectsz=512 attr=2, projid32bit=1 00:08:52.082 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:52.082 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:52.082 data = bsize=4096 blocks=130560, imaxpct=25 00:08:52.082 = sunit=0 swidth=0 blks 00:08:52.082 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:52.082 log =internal log bsize=4096 blocks=16384, version=2 00:08:52.082 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:52.082 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:52.658 Discarding blocks...Done. 00:08:52.658 10:09:12 -- common/autotest_common.sh@931 -- # return 0 00:08:52.658 10:09:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:54.561 10:09:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:54.561 10:09:13 -- target/filesystem.sh@25 -- # sync 00:08:54.561 10:09:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:54.561 10:09:13 -- target/filesystem.sh@27 -- # sync 00:08:54.561 10:09:13 -- target/filesystem.sh@29 -- # i=0 00:08:54.561 10:09:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:54.561 10:09:13 -- target/filesystem.sh@37 -- # kill -0 72562 00:08:54.561 10:09:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:54.561 10:09:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:54.561 10:09:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:54.561 10:09:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:54.561 00:08:54.561 real 0m2.578s 00:08:54.561 user 0m0.019s 00:08:54.561 sys 0m0.054s 00:08:54.561 10:09:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.561 10:09:13 -- common/autotest_common.sh@10 -- # set +x 00:08:54.561 ************************************ 00:08:54.561 END TEST filesystem_in_capsule_xfs 00:08:54.561 ************************************ 00:08:54.561 10:09:13 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:54.561 10:09:14 -- target/filesystem.sh@93 -- # sync 00:08:54.561 10:09:14 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:54.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.561 10:09:14 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:54.561 10:09:14 -- common/autotest_common.sh@1208 -- # local i=0 00:08:54.561 10:09:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:08:54.561 10:09:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.561 10:09:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:08:54.561 10:09:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:54.561 10:09:14 -- common/autotest_common.sh@1220 -- # return 0 00:08:54.561 10:09:14 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.561 10:09:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.561 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:08:54.561 10:09:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.561 10:09:14 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:54.561 10:09:14 -- target/filesystem.sh@101 -- # killprocess 72562 00:08:54.561 10:09:14 -- common/autotest_common.sh@936 -- # '[' -z 72562 ']' 00:08:54.561 10:09:14 -- common/autotest_common.sh@940 -- # kill -0 72562 00:08:54.561 10:09:14 -- 
common/autotest_common.sh@941 -- # uname 00:08:54.561 10:09:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:54.561 10:09:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72562 00:08:54.819 10:09:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:54.819 10:09:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:54.819 killing process with pid 72562 00:08:54.819 10:09:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72562' 00:08:54.819 10:09:14 -- common/autotest_common.sh@955 -- # kill 72562 00:08:54.819 10:09:14 -- common/autotest_common.sh@960 -- # wait 72562 00:08:55.078 10:09:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:55.078 00:08:55.078 real 0m13.583s 00:08:55.078 user 0m52.197s 00:08:55.078 sys 0m1.885s 00:08:55.078 10:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.078 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:08:55.078 ************************************ 00:08:55.078 END TEST nvmf_filesystem_in_capsule 00:08:55.078 ************************************ 00:08:55.078 10:09:14 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:55.078 10:09:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:55.078 10:09:14 -- nvmf/common.sh@116 -- # sync 00:08:55.078 10:09:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:55.078 10:09:14 -- nvmf/common.sh@119 -- # set +e 00:08:55.078 10:09:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:55.078 10:09:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:55.078 rmmod nvme_tcp 00:08:55.078 rmmod nvme_fabrics 00:08:55.078 rmmod nvme_keyring 00:08:55.078 10:09:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:55.078 10:09:14 -- nvmf/common.sh@123 -- # set -e 00:08:55.078 10:09:14 -- nvmf/common.sh@124 -- # return 0 00:08:55.078 10:09:14 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:55.078 10:09:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:55.078 10:09:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:55.078 10:09:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:55.078 10:09:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.078 10:09:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:55.078 10:09:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.078 10:09:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.078 10:09:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.078 10:09:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:55.078 00:08:55.078 real 0m28.819s 00:08:55.078 user 1m47.610s 00:08:55.078 sys 0m4.180s 00:08:55.078 10:09:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:55.078 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:08:55.078 ************************************ 00:08:55.078 END TEST nvmf_filesystem 00:08:55.078 ************************************ 00:08:55.078 10:09:14 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:55.078 10:09:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.078 10:09:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.078 10:09:14 -- common/autotest_common.sh@10 -- # set +x 00:08:55.078 ************************************ 00:08:55.078 START TEST nvmf_discovery 00:08:55.078 ************************************ 00:08:55.078 10:09:14 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:55.337 * Looking for test storage... 00:08:55.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.337 10:09:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:55.337 10:09:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:55.337 10:09:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:55.337 10:09:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:55.337 10:09:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:55.337 10:09:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:55.337 10:09:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:55.337 10:09:14 -- scripts/common.sh@335 -- # IFS=.-: 00:08:55.337 10:09:14 -- scripts/common.sh@335 -- # read -ra ver1 00:08:55.337 10:09:14 -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.337 10:09:14 -- scripts/common.sh@336 -- # read -ra ver2 00:08:55.337 10:09:14 -- scripts/common.sh@337 -- # local 'op=<' 00:08:55.337 10:09:14 -- scripts/common.sh@339 -- # ver1_l=2 00:08:55.337 10:09:14 -- scripts/common.sh@340 -- # ver2_l=1 00:08:55.337 10:09:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:55.337 10:09:14 -- scripts/common.sh@343 -- # case "$op" in 00:08:55.337 10:09:14 -- scripts/common.sh@344 -- # : 1 00:08:55.337 10:09:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:55.337 10:09:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.337 10:09:14 -- scripts/common.sh@364 -- # decimal 1 00:08:55.337 10:09:14 -- scripts/common.sh@352 -- # local d=1 00:08:55.337 10:09:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.337 10:09:14 -- scripts/common.sh@354 -- # echo 1 00:08:55.338 10:09:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:55.338 10:09:14 -- scripts/common.sh@365 -- # decimal 2 00:08:55.338 10:09:14 -- scripts/common.sh@352 -- # local d=2 00:08:55.338 10:09:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.338 10:09:14 -- scripts/common.sh@354 -- # echo 2 00:08:55.338 10:09:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:55.338 10:09:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:55.338 10:09:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:55.338 10:09:14 -- scripts/common.sh@367 -- # return 0 00:08:55.338 10:09:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.338 10:09:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:55.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.338 --rc genhtml_branch_coverage=1 00:08:55.338 --rc genhtml_function_coverage=1 00:08:55.338 --rc genhtml_legend=1 00:08:55.338 --rc geninfo_all_blocks=1 00:08:55.338 --rc geninfo_unexecuted_blocks=1 00:08:55.338 00:08:55.338 ' 00:08:55.338 10:09:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:55.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.338 --rc genhtml_branch_coverage=1 00:08:55.338 --rc genhtml_function_coverage=1 00:08:55.338 --rc genhtml_legend=1 00:08:55.338 --rc geninfo_all_blocks=1 00:08:55.338 --rc geninfo_unexecuted_blocks=1 00:08:55.338 00:08:55.338 ' 00:08:55.338 10:09:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:55.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.338 --rc genhtml_branch_coverage=1 00:08:55.338 --rc genhtml_function_coverage=1 00:08:55.338 --rc genhtml_legend=1 00:08:55.338 
--rc geninfo_all_blocks=1 00:08:55.338 --rc geninfo_unexecuted_blocks=1 00:08:55.338 00:08:55.338 ' 00:08:55.338 10:09:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:55.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.338 --rc genhtml_branch_coverage=1 00:08:55.338 --rc genhtml_function_coverage=1 00:08:55.338 --rc genhtml_legend=1 00:08:55.338 --rc geninfo_all_blocks=1 00:08:55.338 --rc geninfo_unexecuted_blocks=1 00:08:55.338 00:08:55.338 ' 00:08:55.338 10:09:14 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.338 10:09:14 -- nvmf/common.sh@7 -- # uname -s 00:08:55.338 10:09:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.338 10:09:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.338 10:09:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.338 10:09:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.338 10:09:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.338 10:09:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.338 10:09:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.338 10:09:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.338 10:09:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.338 10:09:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:08:55.338 10:09:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:08:55.338 10:09:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.338 10:09:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.338 10:09:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.338 10:09:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.338 10:09:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.338 10:09:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.338 10:09:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.338 10:09:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.338 10:09:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.338 10:09:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.338 10:09:14 -- paths/export.sh@5 -- # export PATH 00:08:55.338 10:09:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.338 10:09:14 -- nvmf/common.sh@46 -- # : 0 00:08:55.338 10:09:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:55.338 10:09:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:55.338 10:09:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:55.338 10:09:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.338 10:09:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.338 10:09:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:55.338 10:09:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:55.338 10:09:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:55.338 10:09:14 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:55.338 10:09:14 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:55.338 10:09:14 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:55.338 10:09:14 -- target/discovery.sh@15 -- # hash nvme 00:08:55.338 10:09:14 -- target/discovery.sh@20 -- # nvmftestinit 00:08:55.338 10:09:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:55.338 10:09:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.338 10:09:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:55.338 10:09:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:55.338 10:09:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:55.338 10:09:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.338 10:09:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.338 10:09:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.338 10:09:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:55.338 10:09:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:55.338 10:09:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.338 10:09:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.338 10:09:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:55.338 10:09:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:55.338 10:09:14 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.338 10:09:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:55.338 10:09:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:55.338 10:09:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.338 10:09:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:55.338 10:09:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:55.338 10:09:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:55.338 10:09:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:55.338 10:09:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:55.338 10:09:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:55.338 Cannot find device "nvmf_tgt_br" 00:08:55.338 10:09:14 -- nvmf/common.sh@154 -- # true 00:08:55.338 10:09:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.338 Cannot find device "nvmf_tgt_br2" 00:08:55.338 10:09:14 -- nvmf/common.sh@155 -- # true 00:08:55.338 10:09:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:55.338 10:09:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:55.338 Cannot find device "nvmf_tgt_br" 00:08:55.338 10:09:14 -- nvmf/common.sh@157 -- # true 00:08:55.338 10:09:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:55.338 Cannot find device "nvmf_tgt_br2" 00:08:55.338 10:09:14 -- nvmf/common.sh@158 -- # true 00:08:55.338 10:09:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:55.597 10:09:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:55.597 10:09:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.597 10:09:14 -- nvmf/common.sh@161 -- # true 00:08:55.597 10:09:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.597 10:09:14 -- nvmf/common.sh@162 -- # true 00:08:55.597 10:09:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:55.597 10:09:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:55.597 10:09:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:55.597 10:09:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:55.597 10:09:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:55.597 10:09:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:55.597 10:09:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:55.597 10:09:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:55.597 10:09:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:55.597 10:09:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:55.597 10:09:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:55.597 10:09:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:55.597 10:09:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:55.597 10:09:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:55.597 10:09:15 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:55.597 10:09:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:55.598 10:09:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:55.598 10:09:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:55.598 10:09:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:55.598 10:09:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:55.598 10:09:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:55.598 10:09:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:55.598 10:09:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:55.598 10:09:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:55.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:08:55.598 00:08:55.598 --- 10.0.0.2 ping statistics --- 00:08:55.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.598 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:55.598 10:09:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:55.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:55.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:08:55.598 00:08:55.598 --- 10.0.0.3 ping statistics --- 00:08:55.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.598 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:08:55.598 10:09:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:55.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:08:55.598 00:08:55.598 --- 10.0.0.1 ping statistics --- 00:08:55.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.598 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:08:55.598 10:09:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.598 10:09:15 -- nvmf/common.sh@421 -- # return 0 00:08:55.598 10:09:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:55.598 10:09:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.598 10:09:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:55.598 10:09:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:55.598 10:09:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.598 10:09:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:55.598 10:09:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:55.858 10:09:15 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:55.858 10:09:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:55.858 10:09:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:55.858 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 10:09:15 -- nvmf/common.sh@469 -- # nvmfpid=73098 00:08:55.858 10:09:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:55.858 10:09:15 -- nvmf/common.sh@470 -- # waitforlisten 73098 00:08:55.858 10:09:15 -- common/autotest_common.sh@829 -- # '[' -z 73098 ']' 00:08:55.858 10:09:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.858 10:09:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.858 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.858 10:09:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.858 10:09:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.858 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 [2024-11-19 10:09:15.206800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:55.858 [2024-11-19 10:09:15.206912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.858 [2024-11-19 10:09:15.343141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:55.858 [2024-11-19 10:09:15.377425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.858 [2024-11-19 10:09:15.377574] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.858 [2024-11-19 10:09:15.377588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.858 [2024-11-19 10:09:15.377597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.858 [2024-11-19 10:09:15.377765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.858 [2024-11-19 10:09:15.377810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.858 [2024-11-19 10:09:15.378520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.858 [2024-11-19 10:09:15.378564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.118 10:09:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.118 10:09:15 -- common/autotest_common.sh@862 -- # return 0 00:08:56.118 10:09:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:56.118 10:09:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.118 10:09:15 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 [2024-11-19 10:09:15.511869] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@26 -- # seq 1 4 00:08:56.118 10:09:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:56.118 10:09:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 Null1 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
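The trace above has brought the target up and is now walking a fixed RPC sequence for each of the four test subsystems. A condensed sketch of that sequence, assuming scripts/rpc.py is driven directly against the default /var/tmp/spdk.sock (the test itself goes through its rpc_cmd wrapper, and the names and serials follow the Null1/cnode1 pattern in the trace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # TCP transport with the test's NVMF_TRANSPORT_OPTS
    scripts/rpc.py bdev_null_create Null1 102400 512                            # null bdev backing the namespace (size/block size as in the trace)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1       # expose the bdev as a namespace of cnode1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the in-namespace target address

The same steps repeat for Null2/cnode2 through Null4/cnode4 below, followed by a discovery listener and a port 4430 referral before the nvme discover check.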
00:08:56.118 10:09:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 [2024-11-19 10:09:15.570312] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:56.118 10:09:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 Null2 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:56.118 10:09:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 Null3 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:56.118 10:09:15 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 Null4 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.118 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.118 10:09:15 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:56.118 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.118 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.378 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.378 10:09:15 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:56.378 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.378 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.378 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.378 10:09:15 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.378 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.378 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.378 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.378 10:09:15 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:56.378 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.378 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.378 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.378 10:09:15 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 4420 00:08:56.378 00:08:56.378 Discovery Log Number of Records 6, Generation counter 6 00:08:56.378 =====Discovery Log Entry 0====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: current discovery subsystem 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4420 00:08:56.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: explicit discovery connections, duplicate discovery information 00:08:56.378 sectype: none 00:08:56.378 =====Discovery Log Entry 1====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: nvme subsystem 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4420 00:08:56.378 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: none 00:08:56.378 sectype: none 00:08:56.378 =====Discovery Log Entry 2====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: nvme subsystem 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4420 
00:08:56.378 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: none 00:08:56.378 sectype: none 00:08:56.378 =====Discovery Log Entry 3====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: nvme subsystem 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4420 00:08:56.378 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: none 00:08:56.378 sectype: none 00:08:56.378 =====Discovery Log Entry 4====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: nvme subsystem 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4420 00:08:56.378 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: none 00:08:56.378 sectype: none 00:08:56.378 =====Discovery Log Entry 5====== 00:08:56.378 trtype: tcp 00:08:56.378 adrfam: ipv4 00:08:56.378 subtype: discovery subsystem referral 00:08:56.378 treq: not required 00:08:56.378 portid: 0 00:08:56.378 trsvcid: 4430 00:08:56.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:56.378 traddr: 10.0.0.2 00:08:56.378 eflags: none 00:08:56.378 sectype: none 00:08:56.378 Perform nvmf subsystem discovery via RPC 00:08:56.378 10:09:15 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:56.378 10:09:15 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:56.378 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.378 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.378 [2024-11-19 10:09:15.802366] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:56.378 [ 00:08:56.378 { 00:08:56.378 "allow_any_host": true, 00:08:56.378 "hosts": [], 00:08:56.378 "listen_addresses": [ 00:08:56.378 { 00:08:56.378 "adrfam": "IPv4", 00:08:56.378 "traddr": "10.0.0.2", 00:08:56.378 "transport": "TCP", 00:08:56.378 "trsvcid": "4420", 00:08:56.378 "trtype": "TCP" 00:08:56.378 } 00:08:56.378 ], 00:08:56.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:56.378 "subtype": "Discovery" 00:08:56.378 }, 00:08:56.378 { 00:08:56.378 "allow_any_host": true, 00:08:56.378 "hosts": [], 00:08:56.378 "listen_addresses": [ 00:08:56.378 { 00:08:56.378 "adrfam": "IPv4", 00:08:56.378 "traddr": "10.0.0.2", 00:08:56.378 "transport": "TCP", 00:08:56.378 "trsvcid": "4420", 00:08:56.378 "trtype": "TCP" 00:08:56.378 } 00:08:56.378 ], 00:08:56.378 "max_cntlid": 65519, 00:08:56.378 "max_namespaces": 32, 00:08:56.378 "min_cntlid": 1, 00:08:56.378 "model_number": "SPDK bdev Controller", 00:08:56.378 "namespaces": [ 00:08:56.378 { 00:08:56.378 "bdev_name": "Null1", 00:08:56.378 "name": "Null1", 00:08:56.378 "nguid": "020FB8EF7E6344098352571FB8B6B12D", 00:08:56.378 "nsid": 1, 00:08:56.378 "uuid": "020fb8ef-7e63-4409-8352-571fb8b6b12d" 00:08:56.378 } 00:08:56.378 ], 00:08:56.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.378 "serial_number": "SPDK00000000000001", 00:08:56.378 "subtype": "NVMe" 00:08:56.378 }, 00:08:56.378 { 00:08:56.378 "allow_any_host": true, 00:08:56.378 "hosts": [], 00:08:56.378 "listen_addresses": [ 00:08:56.378 { 00:08:56.378 "adrfam": "IPv4", 00:08:56.378 "traddr": "10.0.0.2", 00:08:56.378 "transport": "TCP", 00:08:56.378 "trsvcid": "4420", 00:08:56.378 "trtype": "TCP" 00:08:56.378 } 00:08:56.378 ], 00:08:56.378 "max_cntlid": 65519, 00:08:56.379 "max_namespaces": 32, 00:08:56.379 "min_cntlid": 1, 
00:08:56.379 "model_number": "SPDK bdev Controller", 00:08:56.379 "namespaces": [ 00:08:56.379 { 00:08:56.379 "bdev_name": "Null2", 00:08:56.379 "name": "Null2", 00:08:56.379 "nguid": "06AB5B46C56C4FE29CF32042FD241640", 00:08:56.379 "nsid": 1, 00:08:56.379 "uuid": "06ab5b46-c56c-4fe2-9cf3-2042fd241640" 00:08:56.379 } 00:08:56.379 ], 00:08:56.379 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:56.379 "serial_number": "SPDK00000000000002", 00:08:56.379 "subtype": "NVMe" 00:08:56.379 }, 00:08:56.379 { 00:08:56.379 "allow_any_host": true, 00:08:56.379 "hosts": [], 00:08:56.379 "listen_addresses": [ 00:08:56.379 { 00:08:56.379 "adrfam": "IPv4", 00:08:56.379 "traddr": "10.0.0.2", 00:08:56.379 "transport": "TCP", 00:08:56.379 "trsvcid": "4420", 00:08:56.379 "trtype": "TCP" 00:08:56.379 } 00:08:56.379 ], 00:08:56.379 "max_cntlid": 65519, 00:08:56.379 "max_namespaces": 32, 00:08:56.379 "min_cntlid": 1, 00:08:56.379 "model_number": "SPDK bdev Controller", 00:08:56.379 "namespaces": [ 00:08:56.379 { 00:08:56.379 "bdev_name": "Null3", 00:08:56.379 "name": "Null3", 00:08:56.379 "nguid": "20D303243D294218B611B3017F011F8B", 00:08:56.379 "nsid": 1, 00:08:56.379 "uuid": "20d30324-3d29-4218-b611-b3017f011f8b" 00:08:56.379 } 00:08:56.379 ], 00:08:56.379 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:56.379 "serial_number": "SPDK00000000000003", 00:08:56.379 "subtype": "NVMe" 00:08:56.379 }, 00:08:56.379 { 00:08:56.379 "allow_any_host": true, 00:08:56.379 "hosts": [], 00:08:56.379 "listen_addresses": [ 00:08:56.379 { 00:08:56.379 "adrfam": "IPv4", 00:08:56.379 "traddr": "10.0.0.2", 00:08:56.379 "transport": "TCP", 00:08:56.379 "trsvcid": "4420", 00:08:56.379 "trtype": "TCP" 00:08:56.379 } 00:08:56.379 ], 00:08:56.379 "max_cntlid": 65519, 00:08:56.379 "max_namespaces": 32, 00:08:56.379 "min_cntlid": 1, 00:08:56.379 "model_number": "SPDK bdev Controller", 00:08:56.379 "namespaces": [ 00:08:56.379 { 00:08:56.379 "bdev_name": "Null4", 00:08:56.379 "name": "Null4", 00:08:56.379 "nguid": "D91C52E83FE4449AAF4CFD346FE0C0FD", 00:08:56.379 "nsid": 1, 00:08:56.379 "uuid": "d91c52e8-3fe4-449a-af4c-fd346fe0c0fd" 00:08:56.379 } 00:08:56.379 ], 00:08:56.379 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:56.379 "serial_number": "SPDK00000000000004", 00:08:56.379 "subtype": "NVMe" 00:08:56.379 } 00:08:56.379 ] 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@42 -- # seq 1 4 00:08:56.379 10:09:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.379 10:09:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.379 10:09:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.379 10:09:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:56.379 10:09:15 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.379 10:09:15 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:56.379 10:09:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.379 10:09:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.379 10:09:15 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:56.379 10:09:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.638 10:09:15 -- target/discovery.sh@49 -- # check_bdevs= 00:08:56.638 10:09:15 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:56.638 10:09:15 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:56.638 10:09:15 -- target/discovery.sh@57 -- # nvmftestfini 00:08:56.638 10:09:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:56.638 10:09:15 -- nvmf/common.sh@116 -- # sync 00:08:56.638 10:09:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:56.638 10:09:15 -- nvmf/common.sh@119 -- # set +e 00:08:56.638 10:09:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:56.638 10:09:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:56.638 rmmod nvme_tcp 00:08:56.638 rmmod nvme_fabrics 00:08:56.638 rmmod nvme_keyring 00:08:56.638 10:09:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:56.638 10:09:16 -- nvmf/common.sh@123 -- # set -e 00:08:56.638 10:09:16 -- nvmf/common.sh@124 -- # return 0 00:08:56.638 10:09:16 -- nvmf/common.sh@477 -- # '[' -n 73098 ']' 00:08:56.638 10:09:16 -- nvmf/common.sh@478 -- # killprocess 73098 00:08:56.638 10:09:16 -- common/autotest_common.sh@936 -- # '[' -z 73098 ']' 00:08:56.638 10:09:16 -- 
common/autotest_common.sh@940 -- # kill -0 73098 00:08:56.638 10:09:16 -- common/autotest_common.sh@941 -- # uname 00:08:56.638 10:09:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.639 10:09:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73098 00:08:56.639 10:09:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.639 10:09:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.639 killing process with pid 73098 00:08:56.639 10:09:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73098' 00:08:56.639 10:09:16 -- common/autotest_common.sh@955 -- # kill 73098 00:08:56.639 [2024-11-19 10:09:16.061388] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:56.639 10:09:16 -- common/autotest_common.sh@960 -- # wait 73098 00:08:56.898 10:09:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:56.898 10:09:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:56.898 10:09:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:56.898 10:09:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.898 10:09:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:56.898 10:09:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.898 10:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.898 10:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.898 10:09:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:56.898 00:08:56.898 real 0m1.650s 00:08:56.898 user 0m3.520s 00:08:56.898 sys 0m0.503s 00:08:56.898 10:09:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.898 10:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.898 ************************************ 00:08:56.898 END TEST nvmf_discovery 00:08:56.898 ************************************ 00:08:56.898 10:09:16 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:56.898 10:09:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:56.898 10:09:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.898 10:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.898 ************************************ 00:08:56.898 START TEST nvmf_referrals 00:08:56.898 ************************************ 00:08:56.898 10:09:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:56.898 * Looking for test storage... 
00:08:56.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.898 10:09:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:56.898 10:09:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:56.898 10:09:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:57.158 10:09:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:57.158 10:09:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:57.158 10:09:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:57.158 10:09:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:57.158 10:09:16 -- scripts/common.sh@335 -- # IFS=.-: 00:08:57.158 10:09:16 -- scripts/common.sh@335 -- # read -ra ver1 00:08:57.158 10:09:16 -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.158 10:09:16 -- scripts/common.sh@336 -- # read -ra ver2 00:08:57.158 10:09:16 -- scripts/common.sh@337 -- # local 'op=<' 00:08:57.158 10:09:16 -- scripts/common.sh@339 -- # ver1_l=2 00:08:57.158 10:09:16 -- scripts/common.sh@340 -- # ver2_l=1 00:08:57.158 10:09:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:57.158 10:09:16 -- scripts/common.sh@343 -- # case "$op" in 00:08:57.158 10:09:16 -- scripts/common.sh@344 -- # : 1 00:08:57.158 10:09:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:57.158 10:09:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.158 10:09:16 -- scripts/common.sh@364 -- # decimal 1 00:08:57.158 10:09:16 -- scripts/common.sh@352 -- # local d=1 00:08:57.158 10:09:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.158 10:09:16 -- scripts/common.sh@354 -- # echo 1 00:08:57.158 10:09:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:57.158 10:09:16 -- scripts/common.sh@365 -- # decimal 2 00:08:57.158 10:09:16 -- scripts/common.sh@352 -- # local d=2 00:08:57.158 10:09:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.158 10:09:16 -- scripts/common.sh@354 -- # echo 2 00:08:57.158 10:09:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:57.158 10:09:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:57.158 10:09:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:57.158 10:09:16 -- scripts/common.sh@367 -- # return 0 00:08:57.158 10:09:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.158 10:09:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:57.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.158 --rc genhtml_branch_coverage=1 00:08:57.158 --rc genhtml_function_coverage=1 00:08:57.158 --rc genhtml_legend=1 00:08:57.158 --rc geninfo_all_blocks=1 00:08:57.158 --rc geninfo_unexecuted_blocks=1 00:08:57.158 00:08:57.158 ' 00:08:57.158 10:09:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:57.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.158 --rc genhtml_branch_coverage=1 00:08:57.158 --rc genhtml_function_coverage=1 00:08:57.158 --rc genhtml_legend=1 00:08:57.158 --rc geninfo_all_blocks=1 00:08:57.158 --rc geninfo_unexecuted_blocks=1 00:08:57.158 00:08:57.158 ' 00:08:57.158 10:09:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:57.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.158 --rc genhtml_branch_coverage=1 00:08:57.158 --rc genhtml_function_coverage=1 00:08:57.158 --rc genhtml_legend=1 00:08:57.158 --rc geninfo_all_blocks=1 00:08:57.158 --rc geninfo_unexecuted_blocks=1 00:08:57.158 00:08:57.158 ' 00:08:57.158 
10:09:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:57.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.158 --rc genhtml_branch_coverage=1 00:08:57.158 --rc genhtml_function_coverage=1 00:08:57.158 --rc genhtml_legend=1 00:08:57.158 --rc geninfo_all_blocks=1 00:08:57.158 --rc geninfo_unexecuted_blocks=1 00:08:57.158 00:08:57.158 ' 00:08:57.158 10:09:16 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:57.158 10:09:16 -- nvmf/common.sh@7 -- # uname -s 00:08:57.158 10:09:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.158 10:09:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.158 10:09:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.158 10:09:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.158 10:09:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.158 10:09:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.158 10:09:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.158 10:09:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.158 10:09:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.158 10:09:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.158 10:09:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:08:57.158 10:09:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:08:57.158 10:09:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.158 10:09:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.158 10:09:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:57.158 10:09:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:57.158 10:09:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.158 10:09:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.158 10:09:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.158 10:09:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.159 10:09:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.159 10:09:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.159 10:09:16 -- paths/export.sh@5 -- # export PATH 00:08:57.159 10:09:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.159 10:09:16 -- nvmf/common.sh@46 -- # : 0 00:08:57.159 10:09:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:57.159 10:09:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:57.159 10:09:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:57.159 10:09:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.159 10:09:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.159 10:09:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:57.159 10:09:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:57.159 10:09:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:57.159 10:09:16 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:57.159 10:09:16 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:57.159 10:09:16 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:57.159 10:09:16 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:57.159 10:09:16 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:57.159 10:09:16 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:57.159 10:09:16 -- target/referrals.sh@37 -- # nvmftestinit 00:08:57.159 10:09:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:57.159 10:09:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.159 10:09:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:57.159 10:09:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:57.159 10:09:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:57.159 10:09:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.159 10:09:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.159 10:09:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.159 10:09:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:57.159 10:09:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:57.159 10:09:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:57.159 10:09:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:57.159 10:09:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:57.159 10:09:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:57.159 10:09:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.159 10:09:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:08:57.159 10:09:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:57.159 10:09:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:57.159 10:09:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:57.159 10:09:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:57.159 10:09:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:57.159 10:09:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.159 10:09:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:57.159 10:09:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:57.159 10:09:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:57.159 10:09:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:57.159 10:09:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:57.159 10:09:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:57.159 Cannot find device "nvmf_tgt_br" 00:08:57.159 10:09:16 -- nvmf/common.sh@154 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:57.159 Cannot find device "nvmf_tgt_br2" 00:08:57.159 10:09:16 -- nvmf/common.sh@155 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:57.159 10:09:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:57.159 Cannot find device "nvmf_tgt_br" 00:08:57.159 10:09:16 -- nvmf/common.sh@157 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:57.159 Cannot find device "nvmf_tgt_br2" 00:08:57.159 10:09:16 -- nvmf/common.sh@158 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:57.159 10:09:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:57.159 10:09:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:57.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.159 10:09:16 -- nvmf/common.sh@161 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:57.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:57.159 10:09:16 -- nvmf/common.sh@162 -- # true 00:08:57.159 10:09:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:57.159 10:09:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:57.159 10:09:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:57.159 10:09:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:57.159 10:09:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:57.159 10:09:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:57.418 10:09:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:57.418 10:09:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:57.418 10:09:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:57.418 10:09:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:57.418 10:09:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:57.418 10:09:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
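Condensed, the nvmf_veth_init steps being traced here (for the referrals run, just as for the discovery run earlier) come down to roughly the following; device, namespace, and address names are the ones in the log, and the "Cannot find device" / "Cannot open network namespace" messages above are only the harmless cleanup of objects that do not exist yet:

    ip netns add nvmf_tgt_ns_spdk                                    # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair (nvmf_tgt_if2/nvmf_tgt_br2 is created the same way)
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address (10.0.0.3 goes on nvmf_tgt_if2)
    ip link add nvmf_br type bridge                                  # bridge tying the host-side ends together (each link is also set up)
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # and across the bridge

With the bridge up, the pings from 10.0.0.1 to 10.0.0.2/10.0.0.3 (and back out from inside the namespace) below confirm the data path before the target application is started.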
00:08:57.418 10:09:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:57.418 10:09:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:57.418 10:09:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:57.418 10:09:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:57.418 10:09:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:57.418 10:09:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:57.418 10:09:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:57.418 10:09:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:57.418 10:09:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.418 10:09:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.418 10:09:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.418 10:09:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:57.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:08:57.418 00:08:57.419 --- 10.0.0.2 ping statistics --- 00:08:57.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.419 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:57.419 10:09:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:57.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:57.419 00:08:57.419 --- 10.0.0.3 ping statistics --- 00:08:57.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.419 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:57.419 10:09:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:57.419 00:08:57.419 --- 10.0.0.1 ping statistics --- 00:08:57.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.419 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:57.419 10:09:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.419 10:09:16 -- nvmf/common.sh@421 -- # return 0 00:08:57.419 10:09:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:57.419 10:09:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.419 10:09:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:57.419 10:09:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:57.419 10:09:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.419 10:09:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:57.419 10:09:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:57.419 10:09:16 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:57.419 10:09:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.419 10:09:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.419 10:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.419 10:09:16 -- nvmf/common.sh@469 -- # nvmfpid=73314 00:08:57.419 10:09:16 -- nvmf/common.sh@470 -- # waitforlisten 73314 00:08:57.419 10:09:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.419 10:09:16 -- common/autotest_common.sh@829 -- # '[' -z 73314 ']' 00:08:57.419 10:09:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.419 10:09:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.419 10:09:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.419 10:09:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.419 10:09:16 -- common/autotest_common.sh@10 -- # set +x 00:08:57.419 [2024-11-19 10:09:16.912734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:57.419 [2024-11-19 10:09:16.912844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.677 [2024-11-19 10:09:17.052291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.677 [2024-11-19 10:09:17.091308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.677 [2024-11-19 10:09:17.091467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.677 [2024-11-19 10:09:17.091483] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.677 [2024-11-19 10:09:17.091494] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
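nvmfappstart, whose output this is, amounts to launching the target inside that namespace and waiting for its RPC socket. A minimal stand-in using the same arguments as the trace (the socket poll below is a simplification; the harness's waitforlisten does more careful readiness checking):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                                # 73314 in this run
    # -m 0xF: reactors on cores 0-3, -e 0xFFFF: tracepoint group mask, -i 0: shared-memory id
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done     # wait for the RPC socket before issuing rpc.py calls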
00:08:57.677 [2024-11-19 10:09:17.091622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.677 [2024-11-19 10:09:17.091699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.677 [2024-11-19 10:09:17.091851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.677 [2024-11-19 10:09:17.091866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.660 10:09:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.660 10:09:17 -- common/autotest_common.sh@862 -- # return 0 00:08:58.660 10:09:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:58.660 10:09:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.660 10:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.660 10:09:17 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.660 10:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 [2024-11-19 10:09:17.966810] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.660 10:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:17 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:58.660 10:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 [2024-11-19 10:09:17.986796] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:58.660 10:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:17 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:58.660 10:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:17 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:58.660 10:09:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:17 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:18 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.660 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:18 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.660 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.660 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:18 -- target/referrals.sh@48 -- # jq length 00:08:58.660 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:18 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:58.660 10:09:18 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:58.660 10:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.660 10:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.660 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 
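From here the referral checks alternate between the RPC view and what an initiator actually sees through the discovery service on port 8009. A sketch of that pattern with scripts/rpc.py and nvme-cli (the --hostnqn/--hostid arguments used in the trace are elided here):

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430      # advertise another discovery service
    scripts/rpc.py nvmf_discovery_get_referrals | jq length                     # 3 once 127.0.0.2/.3/.4 are added
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # the same addresses, seen from the wire
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430   # and they drop back out

Later in the trace, -n is passed to nvmf_discovery_add_referral to point a referral at a specific subsystem NQN rather than the discovery NQN, and the same jq filters verify the subtype and subnqn reported back.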
00:08:58.660 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.660 10:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.660 10:09:18 -- target/referrals.sh@21 -- # sort 00:08:58.660 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.660 10:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:58.660 10:09:18 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:58.660 10:09:18 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:58.660 10:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.660 10:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.660 10:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.660 10:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.660 10:09:18 -- target/referrals.sh@26 -- # sort 00:08:58.919 10:09:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:58.919 10:09:18 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:58.919 10:09:18 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:58.919 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.919 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.919 10:09:18 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:58.919 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.919 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.919 10:09:18 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.919 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.919 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.919 10:09:18 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.919 10:09:18 -- target/referrals.sh@56 -- # jq length 00:08:58.919 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.919 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:58.919 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.919 10:09:18 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:58.919 10:09:18 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:58.919 10:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.919 10:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.919 10:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.919 10:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.919 10:09:18 -- target/referrals.sh@26 -- # sort 00:08:59.180 10:09:18 -- target/referrals.sh@26 -- # echo 00:08:59.180 10:09:18 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:59.180 10:09:18 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:59.180 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.180 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.180 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.180 10:09:18 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:59.180 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.180 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.180 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.180 10:09:18 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:59.180 10:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:59.180 10:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.180 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.180 10:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:59.180 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.180 10:09:18 -- target/referrals.sh@21 -- # sort 00:08:59.180 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.180 10:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:59.180 10:09:18 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:59.180 10:09:18 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:59.180 10:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.180 10:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.180 10:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.180 10:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.180 10:09:18 -- target/referrals.sh@26 -- # sort 00:08:59.180 10:09:18 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:59.180 10:09:18 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:59.180 10:09:18 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:59.180 10:09:18 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:59.180 10:09:18 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:59.180 10:09:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.180 10:09:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:59.438 10:09:18 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:59.438 10:09:18 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:59.438 10:09:18 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:59.438 10:09:18 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:59.438 10:09:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.438 10:09:18 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.438 10:09:18 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.438 10:09:18 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:59.438 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.438 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.438 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.438 10:09:18 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:59.438 10:09:18 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:59.438 10:09:18 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.438 10:09:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.438 10:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:59.438 10:09:18 -- target/referrals.sh@21 -- # sort 00:08:59.438 10:09:18 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:59.438 10:09:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.438 10:09:18 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:59.438 10:09:18 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:59.438 10:09:18 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:59.438 10:09:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.438 10:09:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.438 10:09:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.438 10:09:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.438 10:09:18 -- target/referrals.sh@26 -- # sort 00:08:59.697 10:09:19 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:59.697 10:09:19 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:59.697 10:09:19 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:59.697 10:09:19 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:59.697 10:09:19 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:59.697 10:09:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:59.697 10:09:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.697 10:09:19 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:59.697 10:09:19 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:59.697 10:09:19 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:59.697 10:09:19 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:59.697 10:09:19 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.697 10:09:19 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.955 10:09:19 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.955 10:09:19 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:59.955 10:09:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.955 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.955 10:09:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.955 10:09:19 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.955 10:09:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.955 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.955 10:09:19 -- target/referrals.sh@82 -- # jq length 00:08:59.955 10:09:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.955 10:09:19 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:59.955 10:09:19 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:59.955 10:09:19 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.955 10:09:19 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.955 10:09:19 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.955 10:09:19 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.955 10:09:19 -- target/referrals.sh@26 -- # sort 00:08:59.955 10:09:19 -- target/referrals.sh@26 -- # echo 00:09:00.214 10:09:19 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:00.214 10:09:19 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:00.214 10:09:19 -- target/referrals.sh@86 -- # nvmftestfini 00:09:00.214 10:09:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:00.214 10:09:19 -- nvmf/common.sh@116 -- # sync 00:09:00.214 10:09:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:00.214 10:09:19 -- nvmf/common.sh@119 -- # set +e 00:09:00.214 10:09:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:00.214 10:09:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:00.214 rmmod nvme_tcp 00:09:00.214 rmmod nvme_fabrics 00:09:00.214 rmmod nvme_keyring 00:09:00.214 10:09:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:00.214 10:09:19 -- nvmf/common.sh@123 -- # set -e 00:09:00.214 10:09:19 -- nvmf/common.sh@124 -- # return 0 00:09:00.214 10:09:19 -- nvmf/common.sh@477 -- # '[' -n 73314 ']' 00:09:00.214 10:09:19 -- nvmf/common.sh@478 -- # killprocess 73314 00:09:00.214 10:09:19 -- common/autotest_common.sh@936 -- # '[' -z 73314 ']' 00:09:00.214 10:09:19 -- common/autotest_common.sh@940 -- # kill -0 73314 00:09:00.214 10:09:19 -- common/autotest_common.sh@941 -- # uname 00:09:00.214 10:09:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.214 10:09:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73314 00:09:00.214 10:09:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.214 10:09:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.214 killing process with pid 73314 00:09:00.214 10:09:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73314' 00:09:00.215 10:09:19 -- common/autotest_common.sh@955 -- # kill 73314 00:09:00.215 10:09:19 -- common/autotest_common.sh@960 -- # wait 73314 00:09:00.473 10:09:19 -- nvmf/common.sh@480 -- # 
'[' '' == iso ']' 00:09:00.473 10:09:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:00.473 10:09:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:00.473 10:09:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.473 10:09:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:00.473 10:09:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.473 10:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.473 10:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.473 10:09:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:00.473 ************************************ 00:09:00.473 END TEST nvmf_referrals 00:09:00.473 ************************************ 00:09:00.473 00:09:00.473 real 0m3.545s 00:09:00.473 user 0m12.014s 00:09:00.473 sys 0m0.831s 00:09:00.473 10:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.473 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:09:00.473 10:09:19 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.473 10:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.473 10:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.473 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:09:00.473 ************************************ 00:09:00.473 START TEST nvmf_connect_disconnect 00:09:00.473 ************************************ 00:09:00.473 10:09:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:00.473 * Looking for test storage... 00:09:00.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:00.473 10:09:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:00.473 10:09:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:00.473 10:09:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:00.473 10:09:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:00.473 10:09:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:00.473 10:09:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:00.473 10:09:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:00.473 10:09:20 -- scripts/common.sh@335 -- # IFS=.-: 00:09:00.474 10:09:20 -- scripts/common.sh@335 -- # read -ra ver1 00:09:00.474 10:09:20 -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.474 10:09:20 -- scripts/common.sh@336 -- # read -ra ver2 00:09:00.474 10:09:20 -- scripts/common.sh@337 -- # local 'op=<' 00:09:00.474 10:09:20 -- scripts/common.sh@339 -- # ver1_l=2 00:09:00.474 10:09:20 -- scripts/common.sh@340 -- # ver2_l=1 00:09:00.474 10:09:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:00.474 10:09:20 -- scripts/common.sh@343 -- # case "$op" in 00:09:00.474 10:09:20 -- scripts/common.sh@344 -- # : 1 00:09:00.474 10:09:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:00.474 10:09:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.474 10:09:20 -- scripts/common.sh@364 -- # decimal 1 00:09:00.474 10:09:20 -- scripts/common.sh@352 -- # local d=1 00:09:00.474 10:09:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.474 10:09:20 -- scripts/common.sh@354 -- # echo 1 00:09:00.732 10:09:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:00.732 10:09:20 -- scripts/common.sh@365 -- # decimal 2 00:09:00.732 10:09:20 -- scripts/common.sh@352 -- # local d=2 00:09:00.732 10:09:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.732 10:09:20 -- scripts/common.sh@354 -- # echo 2 00:09:00.732 10:09:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:00.732 10:09:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:00.732 10:09:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:00.732 10:09:20 -- scripts/common.sh@367 -- # return 0 00:09:00.732 10:09:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.732 10:09:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:00.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.732 --rc genhtml_branch_coverage=1 00:09:00.732 --rc genhtml_function_coverage=1 00:09:00.732 --rc genhtml_legend=1 00:09:00.732 --rc geninfo_all_blocks=1 00:09:00.732 --rc geninfo_unexecuted_blocks=1 00:09:00.732 00:09:00.732 ' 00:09:00.732 10:09:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:00.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.732 --rc genhtml_branch_coverage=1 00:09:00.732 --rc genhtml_function_coverage=1 00:09:00.732 --rc genhtml_legend=1 00:09:00.732 --rc geninfo_all_blocks=1 00:09:00.732 --rc geninfo_unexecuted_blocks=1 00:09:00.732 00:09:00.732 ' 00:09:00.732 10:09:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:00.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.732 --rc genhtml_branch_coverage=1 00:09:00.732 --rc genhtml_function_coverage=1 00:09:00.732 --rc genhtml_legend=1 00:09:00.732 --rc geninfo_all_blocks=1 00:09:00.732 --rc geninfo_unexecuted_blocks=1 00:09:00.732 00:09:00.732 ' 00:09:00.732 10:09:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:00.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.732 --rc genhtml_branch_coverage=1 00:09:00.732 --rc genhtml_function_coverage=1 00:09:00.732 --rc genhtml_legend=1 00:09:00.732 --rc geninfo_all_blocks=1 00:09:00.733 --rc geninfo_unexecuted_blocks=1 00:09:00.733 00:09:00.733 ' 00:09:00.733 10:09:20 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:00.733 10:09:20 -- nvmf/common.sh@7 -- # uname -s 00:09:00.733 10:09:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.733 10:09:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.733 10:09:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.733 10:09:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.733 10:09:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.733 10:09:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.733 10:09:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.733 10:09:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.733 10:09:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.733 10:09:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
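For reference, the host identity carried by every nvme discover/connect invocation in this log is derived once during the common test setup; the following is a minimal sketch of that derivation (an assumption about the flow, not a copy of nvmf/common.sh), which explains why --hostnqn and --hostid share the same UUID throughout.

    #!/usr/bin/env bash
    # Sketch only: generate a host NQN with nvme-cli and reuse its UUID suffix
    # as the host ID, as the traced NVME_HOSTNQN/NVME_HOSTID values suggest.
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip everything up to the last ':'
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    echo "host NQN: $NVME_HOSTNQN"
    echo "host ID:  $NVME_HOSTID"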
00:09:00.733 10:09:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:09:00.733 10:09:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.733 10:09:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.733 10:09:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:00.733 10:09:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:00.733 10:09:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.733 10:09:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.733 10:09:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.733 10:09:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.733 10:09:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.733 10:09:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.733 10:09:20 -- paths/export.sh@5 -- # export PATH 00:09:00.733 10:09:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.733 10:09:20 -- nvmf/common.sh@46 -- # : 0 00:09:00.733 10:09:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:00.733 10:09:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:00.733 10:09:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:00.733 10:09:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.733 10:09:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.733 10:09:20 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:09:00.733 10:09:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:00.733 10:09:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:00.733 10:09:20 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.733 10:09:20 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.733 10:09:20 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:00.733 10:09:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:00.733 10:09:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.733 10:09:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:00.733 10:09:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:00.733 10:09:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:00.733 10:09:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.733 10:09:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.733 10:09:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.733 10:09:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:00.733 10:09:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:00.733 10:09:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.733 10:09:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.733 10:09:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:00.733 10:09:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:00.733 10:09:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:00.733 10:09:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:00.733 10:09:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:00.733 10:09:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.733 10:09:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:00.733 10:09:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:00.733 10:09:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:00.733 10:09:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:00.733 10:09:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:00.733 10:09:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:00.733 Cannot find device "nvmf_tgt_br" 00:09:00.733 10:09:20 -- nvmf/common.sh@154 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:00.733 Cannot find device "nvmf_tgt_br2" 00:09:00.733 10:09:20 -- nvmf/common.sh@155 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:00.733 10:09:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:00.733 Cannot find device "nvmf_tgt_br" 00:09:00.733 10:09:20 -- nvmf/common.sh@157 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:00.733 Cannot find device "nvmf_tgt_br2" 00:09:00.733 10:09:20 -- nvmf/common.sh@158 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:00.733 10:09:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:00.733 10:09:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:09:00.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.733 10:09:20 -- nvmf/common.sh@161 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:00.733 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:00.733 10:09:20 -- nvmf/common.sh@162 -- # true 00:09:00.733 10:09:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:00.733 10:09:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:00.733 10:09:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:00.733 10:09:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:00.733 10:09:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:00.733 10:09:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:00.733 10:09:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:00.733 10:09:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:00.993 10:09:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:00.993 10:09:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:00.993 10:09:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:00.993 10:09:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:00.993 10:09:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:00.993 10:09:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:00.993 10:09:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:00.993 10:09:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:00.993 10:09:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:00.993 10:09:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:00.993 10:09:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:00.993 10:09:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:00.993 10:09:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:00.993 10:09:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:00.993 10:09:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:00.993 10:09:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:00.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:09:00.993 00:09:00.993 --- 10.0.0.2 ping statistics --- 00:09:00.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.993 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:00.993 10:09:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:00.993 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:00.993 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:00.993 00:09:00.993 --- 10.0.0.3 ping statistics --- 00:09:00.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.993 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:00.993 10:09:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:00.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:00.993 00:09:00.993 --- 10.0.0.1 ping statistics --- 00:09:00.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.993 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:00.993 10:09:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.993 10:09:20 -- nvmf/common.sh@421 -- # return 0 00:09:00.993 10:09:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:00.993 10:09:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.993 10:09:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:00.993 10:09:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:00.993 10:09:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.993 10:09:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:00.993 10:09:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:00.993 10:09:20 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:00.993 10:09:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:00.993 10:09:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:00.993 10:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:00.993 10:09:20 -- nvmf/common.sh@469 -- # nvmfpid=73631 00:09:00.993 10:09:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.993 10:09:20 -- nvmf/common.sh@470 -- # waitforlisten 73631 00:09:00.993 10:09:20 -- common/autotest_common.sh@829 -- # '[' -z 73631 ']' 00:09:00.993 10:09:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.993 10:09:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.993 10:09:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.993 10:09:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.993 10:09:20 -- common/autotest_common.sh@10 -- # set +x 00:09:00.993 [2024-11-19 10:09:20.467188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:00.993 [2024-11-19 10:09:20.467283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.251 [2024-11-19 10:09:20.602605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:01.251 [2024-11-19 10:09:20.638310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.251 [2024-11-19 10:09:20.638488] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.251 [2024-11-19 10:09:20.638510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.251 [2024-11-19 10:09:20.638523] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
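The nvmf_veth_init sequence traced just above builds the whole test topology from scratch. Condensed into a standalone sketch (interface names, addresses, and firewall rules taken from the trace; error handling and retries omitted), the setup looks like this:

    #!/usr/bin/env bash
    # Sketch of the veth/bridge topology the trace creates: an initiator veth
    # pair in the root namespace, two target veth pairs inside nvmf_tgt_ns_spdk,
    # everything joined by a bridge, TCP/4420 allowed in, and pings as a smoke test.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1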
00:09:01.251 [2024-11-19 10:09:20.638694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.251 [2024-11-19 10:09:20.638809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.251 [2024-11-19 10:09:20.639432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.251 [2024-11-19 10:09:20.639465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.186 10:09:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.186 10:09:21 -- common/autotest_common.sh@862 -- # return 0 00:09:02.186 10:09:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:02.186 10:09:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 10:09:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:02.186 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 [2024-11-19 10:09:21.528467] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.186 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:02.186 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:02.186 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.186 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.186 10:09:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.186 10:09:21 -- common/autotest_common.sh@10 -- # set +x 00:09:02.186 [2024-11-19 10:09:21.590764] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.186 10:09:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:02.186 10:09:21 -- target/connect_disconnect.sh@34 -- # set +x 00:09:04.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:09:13.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.361 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:11:02.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:45.254 10:13:04 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
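Between the target setup traced earlier (create transport, malloc bdev, subsystem, namespace, listener) and the trap reset above, the test runs 100 connect/disconnect iterations; each "disconnected 1 controller(s)" line corresponds to one iteration. The loop below is a hedged sketch of that cycle inferred from the traced NVME_CONNECT='nvme connect -i 8' setting and the disconnect output; the waiting and verification helpers used by connect_disconnect.sh are omitted.

    #!/usr/bin/env bash
    # Hedged sketch of the connect/disconnect cycle implied by the trace:
    # connect with 8 I/O queues to the listener on 10.0.0.2:4420, then
    # disconnect the controller by subsystem NQN, 100 times.
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a
    for ((i = 0; i < 100; i++)); do
        nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
            --hostnqn="$HOSTNQN" --hostid="${HOSTNQN##*:}"
        nvme disconnect -n "$SUBNQN"   # prints "NQN:... disconnected 1 controller(s)"
    done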
00:12:45.254 10:13:04 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:45.254 10:13:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:45.254 10:13:04 -- nvmf/common.sh@116 -- # sync 00:12:45.254 10:13:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:45.254 10:13:04 -- nvmf/common.sh@119 -- # set +e 00:12:45.254 10:13:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:45.254 10:13:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:45.254 rmmod nvme_tcp 00:12:45.254 rmmod nvme_fabrics 00:12:45.255 rmmod nvme_keyring 00:12:45.255 10:13:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:45.255 10:13:04 -- nvmf/common.sh@123 -- # set -e 00:12:45.255 10:13:04 -- nvmf/common.sh@124 -- # return 0 00:12:45.255 10:13:04 -- nvmf/common.sh@477 -- # '[' -n 73631 ']' 00:12:45.255 10:13:04 -- nvmf/common.sh@478 -- # killprocess 73631 00:12:45.255 10:13:04 -- common/autotest_common.sh@936 -- # '[' -z 73631 ']' 00:12:45.255 10:13:04 -- common/autotest_common.sh@940 -- # kill -0 73631 00:12:45.255 10:13:04 -- common/autotest_common.sh@941 -- # uname 00:12:45.255 10:13:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:45.255 10:13:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73631 00:12:45.255 killing process with pid 73631 00:12:45.255 10:13:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:45.255 10:13:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:45.255 10:13:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73631' 00:12:45.255 10:13:04 -- common/autotest_common.sh@955 -- # kill 73631 00:12:45.255 10:13:04 -- common/autotest_common.sh@960 -- # wait 73631 00:12:45.513 10:13:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:45.513 10:13:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:45.513 10:13:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:45.513 10:13:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:45.513 10:13:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:45.513 10:13:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.513 10:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.513 10:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.513 10:13:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:45.513 00:12:45.513 real 3m45.010s 00:12:45.513 user 14m31.021s 00:12:45.513 sys 0m26.447s 00:12:45.513 ************************************ 00:12:45.513 END TEST nvmf_connect_disconnect 00:12:45.513 ************************************ 00:12:45.513 10:13:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.513 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:45.513 10:13:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.513 10:13:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:45.513 10:13:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.513 10:13:04 -- common/autotest_common.sh@10 -- # set +x 00:12:45.513 ************************************ 00:12:45.513 START TEST nvmf_multitarget 00:12:45.513 ************************************ 00:12:45.513 10:13:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.513 * Looking for test storage... 
00:12:45.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:45.513 10:13:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:45.513 10:13:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:45.513 10:13:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:45.772 10:13:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:45.772 10:13:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:45.772 10:13:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:45.772 10:13:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:45.772 10:13:05 -- scripts/common.sh@335 -- # IFS=.-: 00:12:45.772 10:13:05 -- scripts/common.sh@335 -- # read -ra ver1 00:12:45.772 10:13:05 -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.772 10:13:05 -- scripts/common.sh@336 -- # read -ra ver2 00:12:45.772 10:13:05 -- scripts/common.sh@337 -- # local 'op=<' 00:12:45.772 10:13:05 -- scripts/common.sh@339 -- # ver1_l=2 00:12:45.772 10:13:05 -- scripts/common.sh@340 -- # ver2_l=1 00:12:45.772 10:13:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:45.772 10:13:05 -- scripts/common.sh@343 -- # case "$op" in 00:12:45.772 10:13:05 -- scripts/common.sh@344 -- # : 1 00:12:45.772 10:13:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:45.772 10:13:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.772 10:13:05 -- scripts/common.sh@364 -- # decimal 1 00:12:45.772 10:13:05 -- scripts/common.sh@352 -- # local d=1 00:12:45.772 10:13:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.772 10:13:05 -- scripts/common.sh@354 -- # echo 1 00:12:45.772 10:13:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:45.772 10:13:05 -- scripts/common.sh@365 -- # decimal 2 00:12:45.772 10:13:05 -- scripts/common.sh@352 -- # local d=2 00:12:45.772 10:13:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.772 10:13:05 -- scripts/common.sh@354 -- # echo 2 00:12:45.772 10:13:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:45.772 10:13:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:45.772 10:13:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:45.772 10:13:05 -- scripts/common.sh@367 -- # return 0 00:12:45.772 10:13:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.772 10:13:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.772 --rc genhtml_branch_coverage=1 00:12:45.772 --rc genhtml_function_coverage=1 00:12:45.772 --rc genhtml_legend=1 00:12:45.772 --rc geninfo_all_blocks=1 00:12:45.772 --rc geninfo_unexecuted_blocks=1 00:12:45.772 00:12:45.772 ' 00:12:45.772 10:13:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.772 --rc genhtml_branch_coverage=1 00:12:45.772 --rc genhtml_function_coverage=1 00:12:45.772 --rc genhtml_legend=1 00:12:45.772 --rc geninfo_all_blocks=1 00:12:45.772 --rc geninfo_unexecuted_blocks=1 00:12:45.772 00:12:45.772 ' 00:12:45.772 10:13:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.772 --rc genhtml_branch_coverage=1 00:12:45.772 --rc genhtml_function_coverage=1 00:12:45.772 --rc genhtml_legend=1 00:12:45.772 --rc geninfo_all_blocks=1 00:12:45.772 --rc geninfo_unexecuted_blocks=1 00:12:45.772 00:12:45.772 ' 00:12:45.773 
10:13:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:45.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.773 --rc genhtml_branch_coverage=1 00:12:45.773 --rc genhtml_function_coverage=1 00:12:45.773 --rc genhtml_legend=1 00:12:45.773 --rc geninfo_all_blocks=1 00:12:45.773 --rc geninfo_unexecuted_blocks=1 00:12:45.773 00:12:45.773 ' 00:12:45.773 10:13:05 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:45.773 10:13:05 -- nvmf/common.sh@7 -- # uname -s 00:12:45.773 10:13:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.773 10:13:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.773 10:13:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.773 10:13:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.773 10:13:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.773 10:13:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.773 10:13:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.773 10:13:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.773 10:13:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.773 10:13:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:12:45.773 10:13:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:12:45.773 10:13:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.773 10:13:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.773 10:13:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:45.773 10:13:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:45.773 10:13:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.773 10:13:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.773 10:13:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.773 10:13:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.773 10:13:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.773 10:13:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.773 10:13:05 -- paths/export.sh@5 -- # export PATH 00:12:45.773 10:13:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.773 10:13:05 -- nvmf/common.sh@46 -- # : 0 00:12:45.773 10:13:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:45.773 10:13:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:45.773 10:13:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:45.773 10:13:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.773 10:13:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.773 10:13:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:45.773 10:13:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:45.773 10:13:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:45.773 10:13:05 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.773 10:13:05 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:45.773 10:13:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:45.773 10:13:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.773 10:13:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:45.773 10:13:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:45.773 10:13:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:45.773 10:13:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.773 10:13:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.773 10:13:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.773 10:13:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:45.773 10:13:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:45.773 10:13:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.773 10:13:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.773 10:13:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:45.773 10:13:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:45.773 10:13:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:45.773 10:13:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:45.773 10:13:05 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:45.773 10:13:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.773 10:13:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:45.773 10:13:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:45.773 10:13:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:45.773 10:13:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:45.773 10:13:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:45.773 10:13:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:45.773 Cannot find device "nvmf_tgt_br" 00:12:45.773 10:13:05 -- nvmf/common.sh@154 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:45.773 Cannot find device "nvmf_tgt_br2" 00:12:45.773 10:13:05 -- nvmf/common.sh@155 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:45.773 10:13:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:45.773 Cannot find device "nvmf_tgt_br" 00:12:45.773 10:13:05 -- nvmf/common.sh@157 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:45.773 Cannot find device "nvmf_tgt_br2" 00:12:45.773 10:13:05 -- nvmf/common.sh@158 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:45.773 10:13:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:45.773 10:13:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:45.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.773 10:13:05 -- nvmf/common.sh@161 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:45.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:45.773 10:13:05 -- nvmf/common.sh@162 -- # true 00:12:45.773 10:13:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:45.773 10:13:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:45.773 10:13:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:45.773 10:13:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:46.032 10:13:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:46.032 10:13:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:46.032 10:13:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:46.032 10:13:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:46.032 10:13:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:46.032 10:13:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:46.032 10:13:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:46.032 10:13:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:46.032 10:13:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:46.032 10:13:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:46.032 10:13:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:46.032 10:13:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:46.032 10:13:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:46.032 10:13:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:46.032 10:13:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:46.032 10:13:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:46.032 10:13:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:46.032 10:13:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:46.032 10:13:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:46.032 10:13:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:46.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:12:46.032 00:12:46.032 --- 10.0.0.2 ping statistics --- 00:12:46.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.032 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:46.032 10:13:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:46.032 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:46.032 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:46.032 00:12:46.032 --- 10.0.0.3 ping statistics --- 00:12:46.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.032 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:46.032 10:13:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:46.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:12:46.032 00:12:46.032 --- 10.0.0.1 ping statistics --- 00:12:46.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.032 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:46.032 10:13:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.032 10:13:05 -- nvmf/common.sh@421 -- # return 0 00:12:46.032 10:13:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:46.032 10:13:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.032 10:13:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:46.032 10:13:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:46.032 10:13:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.032 10:13:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:46.032 10:13:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:46.032 10:13:05 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:46.032 10:13:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:46.032 10:13:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:46.032 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:46.032 10:13:05 -- nvmf/common.sh@469 -- # nvmfpid=77406 00:12:46.032 10:13:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.032 10:13:05 -- nvmf/common.sh@470 -- # waitforlisten 77406 00:12:46.032 10:13:05 -- common/autotest_common.sh@829 -- # '[' -z 77406 ']' 00:12:46.032 10:13:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.032 10:13:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:46.032 10:13:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
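
The block above is nvmf_veth_init from test/nvmf/common.sh doing its usual setup: one network namespace for the target, veth pairs joined by a bridge back to the initiator side, firewall rules for port 4420, and a ping in each direction before the target application starts. Condensed into the equivalent manual commands (interface names and addresses are the ones echoed above; this is only a recap of the logged steps, assuming iproute2 and iptables are available):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # initiator -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target namespace -> initiator
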
00:12:46.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.032 10:13:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:46.032 10:13:05 -- common/autotest_common.sh@10 -- # set +x 00:12:46.032 [2024-11-19 10:13:05.573643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:46.032 [2024-11-19 10:13:05.573740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.291 [2024-11-19 10:13:05.709309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.291 [2024-11-19 10:13:05.745314] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:46.291 [2024-11-19 10:13:05.745641] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.291 [2024-11-19 10:13:05.745697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.291 [2024-11-19 10:13:05.745902] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.291 [2024-11-19 10:13:05.746138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.291 [2024-11-19 10:13:05.746242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.291 [2024-11-19 10:13:05.746313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.291 [2024-11-19 10:13:05.746314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.292 10:13:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.292 10:13:06 -- common/autotest_common.sh@862 -- # return 0 00:12:47.292 10:13:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:47.292 10:13:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.292 10:13:06 -- common/autotest_common.sh@10 -- # set +x 00:12:47.292 10:13:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.292 10:13:06 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:47.293 10:13:06 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:47.293 10:13:06 -- target/multitarget.sh@21 -- # jq length 00:12:47.293 10:13:06 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:47.293 10:13:06 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:47.551 "nvmf_tgt_1" 00:12:47.551 10:13:06 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:47.551 "nvmf_tgt_2" 00:12:47.551 10:13:06 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:47.551 10:13:06 -- target/multitarget.sh@28 -- # jq length 00:12:47.810 10:13:07 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:47.810 10:13:07 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:47.810 true 00:12:47.810 10:13:07 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_2 00:12:48.069 true 00:12:48.069 10:13:07 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.069 10:13:07 -- target/multitarget.sh@35 -- # jq length 00:12:48.069 10:13:07 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:48.069 10:13:07 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:48.069 10:13:07 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:48.069 10:13:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:48.069 10:13:07 -- nvmf/common.sh@116 -- # sync 00:12:48.327 10:13:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:48.327 10:13:07 -- nvmf/common.sh@119 -- # set +e 00:12:48.327 10:13:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:48.327 10:13:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:48.327 rmmod nvme_tcp 00:12:48.327 rmmod nvme_fabrics 00:12:48.327 rmmod nvme_keyring 00:12:48.327 10:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:48.327 10:13:07 -- nvmf/common.sh@123 -- # set -e 00:12:48.327 10:13:07 -- nvmf/common.sh@124 -- # return 0 00:12:48.327 10:13:07 -- nvmf/common.sh@477 -- # '[' -n 77406 ']' 00:12:48.327 10:13:07 -- nvmf/common.sh@478 -- # killprocess 77406 00:12:48.327 10:13:07 -- common/autotest_common.sh@936 -- # '[' -z 77406 ']' 00:12:48.327 10:13:07 -- common/autotest_common.sh@940 -- # kill -0 77406 00:12:48.327 10:13:07 -- common/autotest_common.sh@941 -- # uname 00:12:48.327 10:13:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.327 10:13:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77406 00:12:48.327 killing process with pid 77406 00:12:48.327 10:13:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:48.327 10:13:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:48.327 10:13:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77406' 00:12:48.327 10:13:07 -- common/autotest_common.sh@955 -- # kill 77406 00:12:48.327 10:13:07 -- common/autotest_common.sh@960 -- # wait 77406 00:12:48.327 10:13:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:48.327 10:13:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:48.327 10:13:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:48.327 10:13:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.327 10:13:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:48.327 10:13:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.327 10:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.327 10:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.586 10:13:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:48.586 ************************************ 00:12:48.586 END TEST nvmf_multitarget 00:12:48.586 ************************************ 00:12:48.586 00:12:48.586 real 0m2.976s 00:12:48.586 user 0m9.851s 00:12:48.586 sys 0m0.630s 00:12:48.586 10:13:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:48.586 10:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:48.586 10:13:07 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.586 10:13:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:48.586 10:13:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.586 10:13:07 -- common/autotest_common.sh@10 -- # set +x 00:12:48.586 
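
The multitarget pass that just finished drives the test's own RPC wrapper, multitarget_rpc.py, rather than scripts/rpc.py. Stripped of the xtrace noise, the sequence it verified is roughly the following (paths and flags as logged; the expected counts come from the jq length checks above):

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length            # 1: only the default target exists
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length            # 3: default + the two just created
  $rpc nvmf_delete_target -n nvmf_tgt_1
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length            # back to 1
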
************************************ 00:12:48.586 START TEST nvmf_rpc 00:12:48.586 ************************************ 00:12:48.586 10:13:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.586 * Looking for test storage... 00:12:48.586 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.586 10:13:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:48.586 10:13:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:48.586 10:13:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:48.845 10:13:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:48.845 10:13:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:48.845 10:13:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:48.845 10:13:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:48.845 10:13:08 -- scripts/common.sh@335 -- # IFS=.-: 00:12:48.845 10:13:08 -- scripts/common.sh@335 -- # read -ra ver1 00:12:48.845 10:13:08 -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.845 10:13:08 -- scripts/common.sh@336 -- # read -ra ver2 00:12:48.845 10:13:08 -- scripts/common.sh@337 -- # local 'op=<' 00:12:48.845 10:13:08 -- scripts/common.sh@339 -- # ver1_l=2 00:12:48.845 10:13:08 -- scripts/common.sh@340 -- # ver2_l=1 00:12:48.845 10:13:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:48.845 10:13:08 -- scripts/common.sh@343 -- # case "$op" in 00:12:48.845 10:13:08 -- scripts/common.sh@344 -- # : 1 00:12:48.845 10:13:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:48.845 10:13:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.845 10:13:08 -- scripts/common.sh@364 -- # decimal 1 00:12:48.845 10:13:08 -- scripts/common.sh@352 -- # local d=1 00:12:48.845 10:13:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.845 10:13:08 -- scripts/common.sh@354 -- # echo 1 00:12:48.845 10:13:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:48.845 10:13:08 -- scripts/common.sh@365 -- # decimal 2 00:12:48.845 10:13:08 -- scripts/common.sh@352 -- # local d=2 00:12:48.845 10:13:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.845 10:13:08 -- scripts/common.sh@354 -- # echo 2 00:12:48.845 10:13:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:48.845 10:13:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:48.845 10:13:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:48.845 10:13:08 -- scripts/common.sh@367 -- # return 0 00:12:48.845 10:13:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.845 10:13:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.845 --rc genhtml_branch_coverage=1 00:12:48.845 --rc genhtml_function_coverage=1 00:12:48.845 --rc genhtml_legend=1 00:12:48.845 --rc geninfo_all_blocks=1 00:12:48.845 --rc geninfo_unexecuted_blocks=1 00:12:48.845 00:12:48.845 ' 00:12:48.845 10:13:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:48.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.846 --rc genhtml_branch_coverage=1 00:12:48.846 --rc genhtml_function_coverage=1 00:12:48.846 --rc genhtml_legend=1 00:12:48.846 --rc geninfo_all_blocks=1 00:12:48.846 --rc geninfo_unexecuted_blocks=1 00:12:48.846 00:12:48.846 ' 00:12:48.846 10:13:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:48.846 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.846 --rc genhtml_branch_coverage=1 00:12:48.846 --rc genhtml_function_coverage=1 00:12:48.846 --rc genhtml_legend=1 00:12:48.846 --rc geninfo_all_blocks=1 00:12:48.846 --rc geninfo_unexecuted_blocks=1 00:12:48.846 00:12:48.846 ' 00:12:48.846 10:13:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:48.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.846 --rc genhtml_branch_coverage=1 00:12:48.846 --rc genhtml_function_coverage=1 00:12:48.846 --rc genhtml_legend=1 00:12:48.846 --rc geninfo_all_blocks=1 00:12:48.846 --rc geninfo_unexecuted_blocks=1 00:12:48.846 00:12:48.846 ' 00:12:48.846 10:13:08 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.846 10:13:08 -- nvmf/common.sh@7 -- # uname -s 00:12:48.846 10:13:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.846 10:13:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.846 10:13:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.846 10:13:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.846 10:13:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.846 10:13:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.846 10:13:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.846 10:13:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.846 10:13:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.846 10:13:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:12:48.846 10:13:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:12:48.846 10:13:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.846 10:13:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.846 10:13:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.846 10:13:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.846 10:13:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.846 10:13:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.846 10:13:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.846 10:13:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.846 10:13:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.846 10:13:08 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.846 10:13:08 -- paths/export.sh@5 -- # export PATH 00:12:48.846 10:13:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.846 10:13:08 -- nvmf/common.sh@46 -- # : 0 00:12:48.846 10:13:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.846 10:13:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.846 10:13:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.846 10:13:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.846 10:13:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.846 10:13:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:48.846 10:13:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.846 10:13:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.846 10:13:08 -- target/rpc.sh@11 -- # loops=5 00:12:48.846 10:13:08 -- target/rpc.sh@23 -- # nvmftestinit 00:12:48.846 10:13:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.846 10:13:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.846 10:13:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.846 10:13:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.846 10:13:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.846 10:13:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.846 10:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.846 10:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.846 10:13:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:48.846 10:13:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:48.846 10:13:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.846 10:13:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.846 10:13:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.846 10:13:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:48.846 10:13:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.846 10:13:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.846 10:13:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.846 10:13:08 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.846 10:13:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.846 10:13:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.846 10:13:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.846 10:13:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.846 10:13:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:48.846 10:13:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:48.846 Cannot find device "nvmf_tgt_br" 00:12:48.846 10:13:08 -- nvmf/common.sh@154 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.846 Cannot find device "nvmf_tgt_br2" 00:12:48.846 10:13:08 -- nvmf/common.sh@155 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:48.846 10:13:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:48.846 Cannot find device "nvmf_tgt_br" 00:12:48.846 10:13:08 -- nvmf/common.sh@157 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:48.846 Cannot find device "nvmf_tgt_br2" 00:12:48.846 10:13:08 -- nvmf/common.sh@158 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:48.846 10:13:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:48.846 10:13:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:48.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.846 10:13:08 -- nvmf/common.sh@161 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:48.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:48.846 10:13:08 -- nvmf/common.sh@162 -- # true 00:12:48.846 10:13:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:48.846 10:13:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:48.846 10:13:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:48.846 10:13:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:48.846 10:13:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:48.846 10:13:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.105 10:13:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.105 10:13:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.105 10:13:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.105 10:13:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:49.105 10:13:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:49.105 10:13:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:49.105 10:13:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:49.105 10:13:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.105 10:13:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.105 10:13:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.105 10:13:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:12:49.105 10:13:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:49.105 10:13:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.105 10:13:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.105 10:13:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.105 10:13:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.105 10:13:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.105 10:13:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:49.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:49.105 00:12:49.105 --- 10.0.0.2 ping statistics --- 00:12:49.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.105 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:49.105 10:13:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:49.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:12:49.105 00:12:49.105 --- 10.0.0.3 ping statistics --- 00:12:49.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.105 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:49.105 10:13:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:49.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:49.105 00:12:49.105 --- 10.0.0.1 ping statistics --- 00:12:49.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.105 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:49.105 10:13:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.105 10:13:08 -- nvmf/common.sh@421 -- # return 0 00:12:49.105 10:13:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:49.105 10:13:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.105 10:13:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:49.105 10:13:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:49.105 10:13:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.105 10:13:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:49.105 10:13:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:49.105 10:13:08 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:49.105 10:13:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:49.105 10:13:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:49.105 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:49.105 10:13:08 -- nvmf/common.sh@469 -- # nvmfpid=77646 00:12:49.105 10:13:08 -- nvmf/common.sh@470 -- # waitforlisten 77646 00:12:49.105 10:13:08 -- common/autotest_common.sh@829 -- # '[' -z 77646 ']' 00:12:49.106 10:13:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.106 10:13:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.106 10:13:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.106 10:13:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
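
nvmfappstart above launches nvmf_tgt inside the namespace and then blocks until its RPC socket answers (the "Waiting for process..." line that follows). A rough stand-in for that helper pair, using the binary path and flags from the log; the poll loop is only an approximation of the waitforlisten helper, not its actual implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
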
00:12:49.106 10:13:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.106 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:49.106 [2024-11-19 10:13:08.583940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:49.106 [2024-11-19 10:13:08.584038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.364 [2024-11-19 10:13:08.726388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.364 [2024-11-19 10:13:08.766359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:49.364 [2024-11-19 10:13:08.766725] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.364 [2024-11-19 10:13:08.766792] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.364 [2024-11-19 10:13:08.767159] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:49.364 [2024-11-19 10:13:08.767352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.364 [2024-11-19 10:13:08.767401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.364 [2024-11-19 10:13:08.767977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.364 [2024-11-19 10:13:08.767985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.364 10:13:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.364 10:13:08 -- common/autotest_common.sh@862 -- # return 0 00:12:49.364 10:13:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:49.364 10:13:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:49.364 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:49.365 10:13:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.365 10:13:08 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:49.365 10:13:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.365 10:13:08 -- common/autotest_common.sh@10 -- # set +x 00:12:49.623 10:13:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.623 10:13:08 -- target/rpc.sh@26 -- # stats='{ 00:12:49.623 "poll_groups": [ 00:12:49.623 { 00:12:49.623 "admin_qpairs": 0, 00:12:49.623 "completed_nvme_io": 0, 00:12:49.623 "current_admin_qpairs": 0, 00:12:49.623 "current_io_qpairs": 0, 00:12:49.623 "io_qpairs": 0, 00:12:49.623 "name": "nvmf_tgt_poll_group_0", 00:12:49.623 "pending_bdev_io": 0, 00:12:49.623 "transports": [] 00:12:49.623 }, 00:12:49.623 { 00:12:49.623 "admin_qpairs": 0, 00:12:49.623 "completed_nvme_io": 0, 00:12:49.623 "current_admin_qpairs": 0, 00:12:49.623 "current_io_qpairs": 0, 00:12:49.623 "io_qpairs": 0, 00:12:49.623 "name": "nvmf_tgt_poll_group_1", 00:12:49.623 "pending_bdev_io": 0, 00:12:49.623 "transports": [] 00:12:49.623 }, 00:12:49.623 { 00:12:49.623 "admin_qpairs": 0, 00:12:49.623 "completed_nvme_io": 0, 00:12:49.623 "current_admin_qpairs": 0, 00:12:49.623 "current_io_qpairs": 0, 00:12:49.623 "io_qpairs": 0, 00:12:49.623 "name": "nvmf_tgt_poll_group_2", 00:12:49.623 "pending_bdev_io": 0, 00:12:49.623 "transports": [] 00:12:49.623 }, 00:12:49.623 { 00:12:49.623 "admin_qpairs": 0, 00:12:49.623 "completed_nvme_io": 0, 00:12:49.623 "current_admin_qpairs": 0, 
00:12:49.623 "current_io_qpairs": 0, 00:12:49.623 "io_qpairs": 0, 00:12:49.623 "name": "nvmf_tgt_poll_group_3", 00:12:49.623 "pending_bdev_io": 0, 00:12:49.623 "transports": [] 00:12:49.623 } 00:12:49.623 ], 00:12:49.623 "tick_rate": 2200000000 00:12:49.623 }' 00:12:49.623 10:13:08 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:49.623 10:13:08 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:49.623 10:13:08 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:49.623 10:13:08 -- target/rpc.sh@15 -- # wc -l 00:12:49.623 10:13:08 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:49.623 10:13:08 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:49.623 10:13:09 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:49.624 10:13:09 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.624 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.624 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.624 [2024-11-19 10:13:09.035736] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:49.624 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.624 10:13:09 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:49.624 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.624 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.624 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.624 10:13:09 -- target/rpc.sh@33 -- # stats='{ 00:12:49.624 "poll_groups": [ 00:12:49.624 { 00:12:49.624 "admin_qpairs": 0, 00:12:49.624 "completed_nvme_io": 0, 00:12:49.624 "current_admin_qpairs": 0, 00:12:49.624 "current_io_qpairs": 0, 00:12:49.624 "io_qpairs": 0, 00:12:49.624 "name": "nvmf_tgt_poll_group_0", 00:12:49.624 "pending_bdev_io": 0, 00:12:49.624 "transports": [ 00:12:49.624 { 00:12:49.624 "trtype": "TCP" 00:12:49.624 } 00:12:49.624 ] 00:12:49.624 }, 00:12:49.624 { 00:12:49.624 "admin_qpairs": 0, 00:12:49.624 "completed_nvme_io": 0, 00:12:49.624 "current_admin_qpairs": 0, 00:12:49.624 "current_io_qpairs": 0, 00:12:49.624 "io_qpairs": 0, 00:12:49.624 "name": "nvmf_tgt_poll_group_1", 00:12:49.624 "pending_bdev_io": 0, 00:12:49.624 "transports": [ 00:12:49.624 { 00:12:49.624 "trtype": "TCP" 00:12:49.624 } 00:12:49.624 ] 00:12:49.624 }, 00:12:49.624 { 00:12:49.624 "admin_qpairs": 0, 00:12:49.624 "completed_nvme_io": 0, 00:12:49.624 "current_admin_qpairs": 0, 00:12:49.624 "current_io_qpairs": 0, 00:12:49.624 "io_qpairs": 0, 00:12:49.624 "name": "nvmf_tgt_poll_group_2", 00:12:49.624 "pending_bdev_io": 0, 00:12:49.624 "transports": [ 00:12:49.624 { 00:12:49.624 "trtype": "TCP" 00:12:49.624 } 00:12:49.624 ] 00:12:49.624 }, 00:12:49.624 { 00:12:49.624 "admin_qpairs": 0, 00:12:49.624 "completed_nvme_io": 0, 00:12:49.624 "current_admin_qpairs": 0, 00:12:49.624 "current_io_qpairs": 0, 00:12:49.624 "io_qpairs": 0, 00:12:49.624 "name": "nvmf_tgt_poll_group_3", 00:12:49.624 "pending_bdev_io": 0, 00:12:49.624 "transports": [ 00:12:49.624 { 00:12:49.624 "trtype": "TCP" 00:12:49.624 } 00:12:49.624 ] 00:12:49.624 } 00:12:49.624 ], 00:12:49.624 "tick_rate": 2200000000 00:12:49.624 }' 00:12:49.624 10:13:09 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.624 10:13:09 -- target/rpc.sh@35 -- # (( 0 == 0 )) 
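
The jcount/jsum helpers exercised here are thin jq/awk one-liners over the nvmf_get_stats JSON shown above. Expanded, the checks amount to the following (rpc_cmd stands for the autotest RPC wrapper seen in the log; shown only to make the pipelines explicit):

  stats=$(rpc_cmd nvmf_get_stats)
  echo "$stats" | jq '.poll_groups[].name' | wc -l                                # jcount: 4 poll groups, one per core in -m 0xF
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # jsum: 0, no host has connected yet
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'  # same pattern for the io_qpairs sum below
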
00:12:49.624 10:13:09 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:49.624 10:13:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:49.881 10:13:09 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:49.881 10:13:09 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:49.881 10:13:09 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:49.881 10:13:09 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:49.881 10:13:09 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 Malloc1 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 [2024-11-19 10:13:09.225152] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a -a 10.0.0.2 -s 4420 00:12:49.881 10:13:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:49.881 10:13:09 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a -a 10.0.0.2 -s 4420 00:12:49.881 10:13:09 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:49.881 10:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.881 10:13:09 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:49.881 10:13:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.881 10:13:09 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:49.881 10:13:09 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.881 10:13:09 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:49.881 10:13:09 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:49.881 10:13:09 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a -a 10.0.0.2 -s 4420 00:12:49.881 [2024-11-19 10:13:09.249402] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a' 00:12:49.881 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:49.881 could not add new controller: failed to write to nvme-fabrics device 00:12:49.881 10:13:09 -- common/autotest_common.sh@653 -- # es=1 00:12:49.881 10:13:09 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.881 10:13:09 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.881 10:13:09 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.881 10:13:09 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:12:49.881 10:13:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.881 10:13:09 -- common/autotest_common.sh@10 -- # set +x 00:12:49.881 10:13:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.881 10:13:09 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.138 10:13:09 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.138 10:13:09 -- common/autotest_common.sh@1187 -- # local i=0 00:12:50.138 10:13:09 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.138 10:13:09 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:50.138 10:13:09 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:52.039 10:13:11 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:52.039 10:13:11 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:52.039 10:13:11 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.039 10:13:11 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:52.039 10:13:11 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.039 10:13:11 -- common/autotest_common.sh@1197 -- # return 0 00:12:52.039 10:13:11 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.039 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.039 10:13:11 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.039 10:13:11 -- common/autotest_common.sh@1208 -- # local i=0 00:12:52.039 10:13:11 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:52.039 10:13:11 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.039 10:13:11 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:52.039 10:13:11 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.039 10:13:11 -- common/autotest_common.sh@1220 -- # return 0 00:12:52.039 10:13:11 -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:12:52.039 10:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.039 10:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:52.039 10:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.039 10:13:11 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.039 10:13:11 -- common/autotest_common.sh@650 -- # local es=0 00:12:52.039 10:13:11 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.039 10:13:11 -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:52.039 10:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.039 10:13:11 -- common/autotest_common.sh@642 -- # type -t nvme 00:12:52.039 10:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.039 10:13:11 -- common/autotest_common.sh@644 -- # type -P nvme 00:12:52.039 10:13:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.039 10:13:11 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:52.039 10:13:11 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:52.039 10:13:11 -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.039 [2024-11-19 10:13:11.560845] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a' 00:12:52.039 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:52.039 could not add new controller: failed to write to nvme-fabrics device 00:12:52.039 10:13:11 -- common/autotest_common.sh@653 -- # es=1 00:12:52.039 10:13:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:52.039 10:13:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:52.039 10:13:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:52.039 10:13:11 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:52.039 10:13:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.039 10:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:52.039 10:13:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.039 10:13:11 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:52.297 10:13:11 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:52.297 10:13:11 -- common/autotest_common.sh@1187 -- # local i=0 00:12:52.297 10:13:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.297 10:13:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:52.297 10:13:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:54.828 10:13:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:54.828 
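
What the connect/reject sequence above is checking is the allow_any_host / add_host / remove_host access-control matrix against a single subsystem. Condensed, with the NQNs taken from the log (the --hostid flag used in the logged connect commands is dropped here for brevity, and rpc_cmd again stands for the autotest RPC wrapper):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a
  subnqn=nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_create_subsystem $subnqn -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns $subnqn Malloc1
  rpc_cmd nvmf_subsystem_allow_any_host -d $subnqn                           # lock the subsystem down
  rpc_cmd nvmf_subsystem_add_listener $subnqn -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$hostnqn -t tcp -n $subnqn -a 10.0.0.2 -s 4420      # rejected: host not in allow list
  rpc_cmd nvmf_subsystem_add_host $subnqn $hostnqn
  nvme connect --hostnqn=$hostnqn -t tcp -n $subnqn -a 10.0.0.2 -s 4420      # accepted
  nvme disconnect -n $subnqn
  rpc_cmd nvmf_subsystem_remove_host $subnqn $hostnqn
  nvme connect --hostnqn=$hostnqn -t tcp -n $subnqn -a 10.0.0.2 -s 4420      # rejected again
  rpc_cmd nvmf_subsystem_allow_any_host -e $subnqn                           # re-open to any host
  nvme connect --hostnqn=$hostnqn -t tcp -n $subnqn -a 10.0.0.2 -s 4420      # accepted
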
10:13:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:54.828 10:13:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.828 10:13:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:54.828 10:13:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.828 10:13:13 -- common/autotest_common.sh@1197 -- # return 0 00:12:54.828 10:13:13 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.828 10:13:13 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.828 10:13:13 -- common/autotest_common.sh@1208 -- # local i=0 00:12:54.828 10:13:13 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.828 10:13:13 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:54.828 10:13:13 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:54.828 10:13:13 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.828 10:13:13 -- common/autotest_common.sh@1220 -- # return 0 00:12:54.828 10:13:13 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.828 10:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.828 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 10:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.828 10:13:13 -- target/rpc.sh@81 -- # seq 1 5 00:12:54.828 10:13:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.828 10:13:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.828 10:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.828 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 10:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.828 10:13:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.828 10:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.828 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 [2024-11-19 10:13:13.848593] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.828 10:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.828 10:13:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.828 10:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.828 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 10:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.828 10:13:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.828 10:13:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.828 10:13:13 -- common/autotest_common.sh@10 -- # set +x 00:12:54.828 10:13:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.828 10:13:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.828 10:13:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:54.828 10:13:14 -- common/autotest_common.sh@1187 -- # local i=0 00:12:54.828 10:13:14 -- common/autotest_common.sh@1188 -- # 
local nvme_device_counter=1 nvme_devices=0 00:12:54.828 10:13:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:54.828 10:13:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:56.730 10:13:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:56.730 10:13:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:56.730 10:13:16 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.730 10:13:16 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:56.730 10:13:16 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.730 10:13:16 -- common/autotest_common.sh@1197 -- # return 0 00:12:56.730 10:13:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.730 10:13:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.730 10:13:16 -- common/autotest_common.sh@1208 -- # local i=0 00:12:56.730 10:13:16 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:56.730 10:13:16 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.730 10:13:16 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:56.730 10:13:16 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.730 10:13:16 -- common/autotest_common.sh@1220 -- # return 0 00:12:56.730 10:13:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.730 10:13:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.730 10:13:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:56.730 10:13:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.730 10:13:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 [2024-11-19 10:13:16.147734] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.730 10:13:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.730 10:13:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:56.730 10:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.730 10:13:16 -- common/autotest_common.sh@10 
-- # set +x 00:12:56.730 10:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.731 10:13:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.989 10:13:16 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.989 10:13:16 -- common/autotest_common.sh@1187 -- # local i=0 00:12:56.989 10:13:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.989 10:13:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:56.989 10:13:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:58.918 10:13:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:58.918 10:13:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:58.918 10:13:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.918 10:13:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:12:58.918 10:13:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.918 10:13:18 -- common/autotest_common.sh@1197 -- # return 0 00:12:58.918 10:13:18 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.918 10:13:18 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.918 10:13:18 -- common/autotest_common.sh@1208 -- # local i=0 00:12:58.918 10:13:18 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:58.918 10:13:18 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.919 10:13:18 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:58.919 10:13:18 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.919 10:13:18 -- common/autotest_common.sh@1220 -- # return 0 00:12:58.919 10:13:18 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.919 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.919 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:58.919 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.919 10:13:18 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.919 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.919 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:58.919 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.919 10:13:18 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.919 10:13:18 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.919 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.919 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:58.919 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.919 10:13:18 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.919 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.919 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:58.919 [2024-11-19 10:13:18.451038] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.919 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.919 10:13:18 -- 
target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.919 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.919 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:59.177 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.177 10:13:18 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:59.177 10:13:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.177 10:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:59.177 10:13:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.177 10:13:18 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.177 10:13:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.177 10:13:18 -- common/autotest_common.sh@1187 -- # local i=0 00:12:59.177 10:13:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.177 10:13:18 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:12:59.177 10:13:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:01.708 10:13:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:01.708 10:13:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:01.708 10:13:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:01.708 10:13:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.708 10:13:20 -- common/autotest_common.sh@1197 -- # return 0 00:13:01.708 10:13:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.708 10:13:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@1208 -- # local i=0 00:13:01.708 10:13:20 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:01.708 10:13:20 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:01.708 10:13:20 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@1220 -- # return 0 00:13:01.708 10:13:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.708 10:13:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 10:13:20 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 [2024-11-19 10:13:20.754350] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.708 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.708 10:13:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.708 10:13:20 -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 10:13:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.708 10:13:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.708 10:13:20 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.708 10:13:20 -- common/autotest_common.sh@1187 -- # local i=0 00:13:01.708 10:13:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.708 10:13:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:01.708 10:13:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:03.648 10:13:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:03.648 10:13:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:03.648 10:13:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.648 10:13:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:03.648 10:13:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.648 10:13:22 -- common/autotest_common.sh@1197 -- # return 0 00:13:03.648 10:13:22 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.648 10:13:22 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.648 10:13:22 -- common/autotest_common.sh@1208 -- # local i=0 00:13:03.648 10:13:23 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:03.648 10:13:23 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.648 10:13:23 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.648 10:13:23 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:13:03.648 10:13:23 -- common/autotest_common.sh@1220 -- # return 0 00:13:03.648 10:13:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.648 10:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.648 10:13:23 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.648 10:13:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.648 10:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.648 10:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 [2024-11-19 10:13:23.053731] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.648 10:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.648 10:13:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.648 10:13:23 -- common/autotest_common.sh@10 -- # set +x 00:13:03.648 10:13:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.648 10:13:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.922 10:13:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.922 10:13:23 -- common/autotest_common.sh@1187 -- # local i=0 00:13:03.922 10:13:23 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.922 10:13:23 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:03.922 10:13:23 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:05.823 10:13:25 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:05.823 10:13:25 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:05.823 10:13:25 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.823 10:13:25 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:05.823 10:13:25 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.823 10:13:25 -- common/autotest_common.sh@1197 -- # return 0 00:13:05.823 10:13:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.823 10:13:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.823 10:13:25 -- common/autotest_common.sh@1208 -- # local i=0 00:13:05.823 10:13:25 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.823 10:13:25 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:13:05.823 10:13:25 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.823 10:13:25 -- common/autotest_common.sh@1216 -- # lsblk 
-l -o NAME,SERIAL 00:13:05.823 10:13:25 -- common/autotest_common.sh@1220 -- # return 0 00:13:05.823 10:13:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.824 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.824 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:05.824 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.824 10:13:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.824 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.824 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:05.824 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.824 10:13:25 -- target/rpc.sh@99 -- # seq 1 5 00:13:05.824 10:13:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:05.824 10:13:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.824 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.824 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:05.824 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.824 10:13:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.824 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.824 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:05.824 [2024-11-19 10:13:25.356926] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.824 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.824 10:13:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:05.824 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.824 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.082 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.082 10:13:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.082 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.082 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.082 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.082 10:13:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.082 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.082 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.082 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.083 10:13:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.083 
10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 [2024-11-19 10:13:25.404970] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.083 10:13:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 [2024-11-19 10:13:25.457032] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- 
target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.083 10:13:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 [2024-11-19 10:13:25.505063] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:06.083 10:13:25 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 [2024-11-19 10:13:25.553103] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:06.083 10:13:25 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:06.083 10:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.083 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:13:06.083 10:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.083 10:13:25 -- target/rpc.sh@110 -- # stats='{ 00:13:06.083 "poll_groups": [ 00:13:06.083 { 00:13:06.083 "admin_qpairs": 2, 00:13:06.083 "completed_nvme_io": 68, 00:13:06.083 "current_admin_qpairs": 0, 00:13:06.083 "current_io_qpairs": 0, 00:13:06.083 "io_qpairs": 16, 00:13:06.083 "name": "nvmf_tgt_poll_group_0", 00:13:06.083 "pending_bdev_io": 0, 00:13:06.083 "transports": [ 00:13:06.083 { 00:13:06.083 "trtype": "TCP" 00:13:06.083 } 00:13:06.083 ] 00:13:06.083 }, 00:13:06.083 { 00:13:06.083 "admin_qpairs": 3, 00:13:06.083 "completed_nvme_io": 69, 00:13:06.083 "current_admin_qpairs": 0, 00:13:06.083 "current_io_qpairs": 0, 00:13:06.083 "io_qpairs": 17, 00:13:06.083 "name": "nvmf_tgt_poll_group_1", 00:13:06.083 "pending_bdev_io": 0, 00:13:06.083 "transports": [ 00:13:06.083 { 00:13:06.083 "trtype": "TCP" 00:13:06.083 } 00:13:06.083 ] 00:13:06.083 }, 00:13:06.083 { 00:13:06.083 "admin_qpairs": 1, 00:13:06.083 "completed_nvme_io": 118, 00:13:06.083 "current_admin_qpairs": 0, 00:13:06.083 "current_io_qpairs": 0, 00:13:06.083 "io_qpairs": 19, 00:13:06.083 "name": "nvmf_tgt_poll_group_2", 00:13:06.083 "pending_bdev_io": 0, 00:13:06.083 "transports": [ 00:13:06.083 { 00:13:06.083 "trtype": "TCP" 00:13:06.083 } 00:13:06.083 ] 00:13:06.083 }, 00:13:06.083 { 00:13:06.083 "admin_qpairs": 1, 00:13:06.083 "completed_nvme_io": 165, 00:13:06.083 "current_admin_qpairs": 0, 00:13:06.083 "current_io_qpairs": 0, 00:13:06.083 "io_qpairs": 18, 00:13:06.083 "name": "nvmf_tgt_poll_group_3", 00:13:06.083 "pending_bdev_io": 0, 00:13:06.084 "transports": [ 00:13:06.084 { 00:13:06.084 "trtype": "TCP" 00:13:06.084 } 00:13:06.084 ] 00:13:06.084 } 00:13:06.084 ], 00:13:06.084 "tick_rate": 2200000000 00:13:06.084 }' 00:13:06.084 10:13:25 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:06.084 10:13:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:06.084 10:13:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:06.084 10:13:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:06.342 10:13:25 -- target/rpc.sh@112 -- # (( 7 > 0 )) 
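The cycles traced above all follow one RPC pattern: create a subsystem with a fixed serial, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 namespace, allow any host, connect with nvme-cli, poll lsblk until the serial shows up, then disconnect and tear the subsystem down; at the end, nvmf_get_stats is summed across poll groups to confirm that admin and I/O queue pairs were really created. A condensed sketch of that flow, assuming rpc_cmd simply wraps scripts/rpc.py (the helper bodies below are reconstructed from the trace, not copied from the test scripts):

    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }   # assumed wrapper

    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # waitforserial: poll lsblk until a device carrying the subsystem serial appears
        until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )); do sleep 2; done

        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

    # jsum-style check: sum one counter over every poll group in the stats JSON
    stats=$(rpc_cmd nvmf_get_stats)
    jsum() { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))

In the run above the admin_qpairs sum came out to 7 and the io_qpairs sum to 70, so both checks pass.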
00:13:06.342 10:13:25 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:06.342 10:13:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:06.342 10:13:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:06.342 10:13:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:06.342 10:13:25 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:13:06.342 10:13:25 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:06.342 10:13:25 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:06.342 10:13:25 -- target/rpc.sh@123 -- # nvmftestfini 00:13:06.342 10:13:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:06.342 10:13:25 -- nvmf/common.sh@116 -- # sync 00:13:06.342 10:13:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:06.342 10:13:25 -- nvmf/common.sh@119 -- # set +e 00:13:06.342 10:13:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:06.342 10:13:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:06.342 rmmod nvme_tcp 00:13:06.342 rmmod nvme_fabrics 00:13:06.342 rmmod nvme_keyring 00:13:06.342 10:13:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:06.342 10:13:25 -- nvmf/common.sh@123 -- # set -e 00:13:06.342 10:13:25 -- nvmf/common.sh@124 -- # return 0 00:13:06.342 10:13:25 -- nvmf/common.sh@477 -- # '[' -n 77646 ']' 00:13:06.342 10:13:25 -- nvmf/common.sh@478 -- # killprocess 77646 00:13:06.342 10:13:25 -- common/autotest_common.sh@936 -- # '[' -z 77646 ']' 00:13:06.342 10:13:25 -- common/autotest_common.sh@940 -- # kill -0 77646 00:13:06.342 10:13:25 -- common/autotest_common.sh@941 -- # uname 00:13:06.342 10:13:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:06.342 10:13:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77646 00:13:06.342 killing process with pid 77646 00:13:06.342 10:13:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:06.342 10:13:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:06.342 10:13:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77646' 00:13:06.342 10:13:25 -- common/autotest_common.sh@955 -- # kill 77646 00:13:06.342 10:13:25 -- common/autotest_common.sh@960 -- # wait 77646 00:13:06.601 10:13:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:06.601 10:13:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:06.601 10:13:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:06.601 10:13:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.601 10:13:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:06.601 10:13:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.601 10:13:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.601 10:13:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.601 10:13:26 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:06.601 ************************************ 00:13:06.601 END TEST nvmf_rpc 00:13:06.601 ************************************ 00:13:06.601 00:13:06.601 real 0m18.075s 00:13:06.601 user 1m7.388s 00:13:06.601 sys 0m2.591s 00:13:06.601 10:13:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:06.601 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 10:13:26 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.601 10:13:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:06.601 10:13:26 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.601 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:13:06.601 ************************************ 00:13:06.601 START TEST nvmf_invalid 00:13:06.601 ************************************ 00:13:06.601 10:13:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.601 * Looking for test storage... 00:13:06.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:06.601 10:13:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:06.601 10:13:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:06.601 10:13:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:06.860 10:13:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:06.860 10:13:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:06.860 10:13:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:06.860 10:13:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:06.860 10:13:26 -- scripts/common.sh@335 -- # IFS=.-: 00:13:06.860 10:13:26 -- scripts/common.sh@335 -- # read -ra ver1 00:13:06.860 10:13:26 -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.860 10:13:26 -- scripts/common.sh@336 -- # read -ra ver2 00:13:06.860 10:13:26 -- scripts/common.sh@337 -- # local 'op=<' 00:13:06.860 10:13:26 -- scripts/common.sh@339 -- # ver1_l=2 00:13:06.860 10:13:26 -- scripts/common.sh@340 -- # ver2_l=1 00:13:06.860 10:13:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:06.860 10:13:26 -- scripts/common.sh@343 -- # case "$op" in 00:13:06.860 10:13:26 -- scripts/common.sh@344 -- # : 1 00:13:06.860 10:13:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:06.860 10:13:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.860 10:13:26 -- scripts/common.sh@364 -- # decimal 1 00:13:06.860 10:13:26 -- scripts/common.sh@352 -- # local d=1 00:13:06.860 10:13:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.860 10:13:26 -- scripts/common.sh@354 -- # echo 1 00:13:06.860 10:13:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:06.860 10:13:26 -- scripts/common.sh@365 -- # decimal 2 00:13:06.860 10:13:26 -- scripts/common.sh@352 -- # local d=2 00:13:06.860 10:13:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.860 10:13:26 -- scripts/common.sh@354 -- # echo 2 00:13:06.860 10:13:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:06.860 10:13:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:06.860 10:13:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:06.860 10:13:26 -- scripts/common.sh@367 -- # return 0 00:13:06.860 10:13:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.860 10:13:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.860 --rc genhtml_branch_coverage=1 00:13:06.860 --rc genhtml_function_coverage=1 00:13:06.860 --rc genhtml_legend=1 00:13:06.860 --rc geninfo_all_blocks=1 00:13:06.860 --rc geninfo_unexecuted_blocks=1 00:13:06.860 00:13:06.860 ' 00:13:06.860 10:13:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.860 --rc genhtml_branch_coverage=1 00:13:06.860 --rc genhtml_function_coverage=1 00:13:06.860 --rc genhtml_legend=1 00:13:06.860 --rc geninfo_all_blocks=1 00:13:06.860 --rc geninfo_unexecuted_blocks=1 00:13:06.860 00:13:06.860 ' 00:13:06.860 10:13:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.860 --rc genhtml_branch_coverage=1 00:13:06.860 --rc genhtml_function_coverage=1 00:13:06.860 --rc genhtml_legend=1 00:13:06.860 --rc geninfo_all_blocks=1 00:13:06.860 --rc geninfo_unexecuted_blocks=1 00:13:06.860 00:13:06.860 ' 00:13:06.860 10:13:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:06.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.860 --rc genhtml_branch_coverage=1 00:13:06.860 --rc genhtml_function_coverage=1 00:13:06.860 --rc genhtml_legend=1 00:13:06.860 --rc geninfo_all_blocks=1 00:13:06.860 --rc geninfo_unexecuted_blocks=1 00:13:06.860 00:13:06.860 ' 00:13:06.860 10:13:26 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.860 10:13:26 -- nvmf/common.sh@7 -- # uname -s 00:13:06.860 10:13:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.860 10:13:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.860 10:13:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.860 10:13:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.860 10:13:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.860 10:13:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.860 10:13:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.860 10:13:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.860 10:13:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.860 10:13:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.860 10:13:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:13:06.860 
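The nvmf/common.sh sourcing just traced is where the host identity used by every nvme connect in this log comes from: nvme gen-hostnqn yields an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the same UUID is reused as the host ID passed via --hostid. A small sketch of that pattern (the UUID-stripping step is illustrative, not lifted from the script):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:71696525-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # reuse the UUID portion as the host id (illustrative)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Every connect earlier in the log passes this same identity:
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420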
10:13:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:13:06.860 10:13:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.860 10:13:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.860 10:13:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.860 10:13:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.860 10:13:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.860 10:13:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.860 10:13:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.861 10:13:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.861 10:13:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.861 10:13:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.861 10:13:26 -- paths/export.sh@5 -- # export PATH 00:13:06.861 10:13:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.861 10:13:26 -- nvmf/common.sh@46 -- # : 0 00:13:06.861 10:13:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:06.861 10:13:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:06.861 10:13:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:06.861 10:13:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.861 10:13:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.861 10:13:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
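The build_nvmf_app_args calls traced above assemble the target command line before the app is launched: the shared-memory instance id defaults to 0, the trace mask 0xFFFF enables every tracepoint group, and the optional no-huge flags stay empty here. A rough sketch under those assumptions (binary path and final -m 0xF core mask are taken from the launch line later in this log):

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP_SHM_ID=0
    NO_HUGE=()
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # instance id 0, all tracepoint groups on
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty here, so no --no-huge flags are added
    # Later the assembled command runs inside the test namespace as:
    #   ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}" -m 0xF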
00:13:06.861 10:13:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:06.861 10:13:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:06.861 10:13:26 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:13:06.861 10:13:26 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:06.861 10:13:26 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:06.861 10:13:26 -- target/invalid.sh@14 -- # target=foobar 00:13:06.861 10:13:26 -- target/invalid.sh@16 -- # RANDOM=0 00:13:06.861 10:13:26 -- target/invalid.sh@34 -- # nvmftestinit 00:13:06.861 10:13:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:06.861 10:13:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.861 10:13:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:06.861 10:13:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:06.861 10:13:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:06.861 10:13:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.861 10:13:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.861 10:13:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.861 10:13:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:06.861 10:13:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:06.861 10:13:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:06.861 10:13:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:06.861 10:13:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:06.861 10:13:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:06.861 10:13:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.861 10:13:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.861 10:13:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:06.861 10:13:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:06.861 10:13:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.861 10:13:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.861 10:13:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.861 10:13:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.861 10:13:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.861 10:13:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.861 10:13:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.861 10:13:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.861 10:13:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:06.861 10:13:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:06.861 Cannot find device "nvmf_tgt_br" 00:13:06.861 10:13:26 -- nvmf/common.sh@154 -- # true 00:13:06.861 10:13:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.861 Cannot find device "nvmf_tgt_br2" 00:13:06.861 10:13:26 -- nvmf/common.sh@155 -- # true 00:13:06.861 10:13:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:06.861 10:13:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:06.861 Cannot find device "nvmf_tgt_br" 00:13:06.861 10:13:26 -- nvmf/common.sh@157 -- # true 00:13:06.861 10:13:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:06.861 Cannot find device "nvmf_tgt_br2" 00:13:06.861 10:13:26 -- nvmf/common.sh@158 -- # true 00:13:06.861 10:13:26 
-- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:06.861 10:13:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:06.861 10:13:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.861 10:13:26 -- nvmf/common.sh@161 -- # true 00:13:06.861 10:13:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.861 10:13:26 -- nvmf/common.sh@162 -- # true 00:13:06.861 10:13:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.861 10:13:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.120 10:13:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.120 10:13:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.120 10:13:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.120 10:13:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.120 10:13:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.120 10:13:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:07.120 10:13:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:07.120 10:13:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:07.120 10:13:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:07.120 10:13:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:07.120 10:13:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:07.120 10:13:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.120 10:13:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.120 10:13:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.120 10:13:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:07.120 10:13:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:07.120 10:13:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:07.120 10:13:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:07.120 10:13:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:07.120 10:13:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:07.120 10:13:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:07.120 10:13:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:07.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:13:07.120 00:13:07.120 --- 10.0.0.2 ping statistics --- 00:13:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.120 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:07.120 10:13:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:07.120 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:13:07.120 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:07.120 00:13:07.120 --- 10.0.0.3 ping statistics --- 00:13:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.120 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:07.120 10:13:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:07.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:13:07.120 00:13:07.120 --- 10.0.0.1 ping statistics --- 00:13:07.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.120 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:07.120 10:13:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.120 10:13:26 -- nvmf/common.sh@421 -- # return 0 00:13:07.120 10:13:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:07.120 10:13:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.120 10:13:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:07.120 10:13:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:07.120 10:13:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.120 10:13:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:07.120 10:13:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:07.120 10:13:26 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:07.120 10:13:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:07.120 10:13:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:07.120 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:13:07.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.120 10:13:26 -- nvmf/common.sh@469 -- # nvmfpid=78146 00:13:07.120 10:13:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:07.120 10:13:26 -- nvmf/common.sh@470 -- # waitforlisten 78146 00:13:07.120 10:13:26 -- common/autotest_common.sh@829 -- # '[' -z 78146 ']' 00:13:07.120 10:13:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.120 10:13:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:07.120 10:13:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.120 10:13:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:07.120 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:13:07.379 [2024-11-19 10:13:26.676287] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:07.379 [2024-11-19 10:13:26.676651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.379 [2024-11-19 10:13:26.827685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:07.379 [2024-11-19 10:13:26.875287] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:07.379 [2024-11-19 10:13:26.875710] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.379 [2024-11-19 10:13:26.875925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
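The nvmf_veth_init trace above builds the virtual test network: one veth pair for the initiator side and one for the target side, the target end moved into the nvmf_tgt_ns_spdk namespace, everything joined by a bridge, port 4420 opened in iptables, and reachability checked with a single ping in each direction before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (cleanup of leftover interfaces and the second target interface are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator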
00:13:07.379 [2024-11-19 10:13:26.876149] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:07.379 [2024-11-19 10:13:26.876421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.379 [2024-11-19 10:13:26.876518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.379 [2024-11-19 10:13:26.876995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:07.379 [2024-11-19 10:13:26.877016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.315 10:13:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:08.315 10:13:27 -- common/autotest_common.sh@862 -- # return 0 00:13:08.315 10:13:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:08.315 10:13:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:08.315 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:13:08.315 10:13:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:08.315 10:13:27 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:08.316 10:13:27 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11401 00:13:08.574 [2024-11-19 10:13:28.034499] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:08.574 10:13:28 -- target/invalid.sh@40 -- # out='2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11401 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:08.574 request: 00:13:08.574 { 00:13:08.574 "method": "nvmf_create_subsystem", 00:13:08.574 "params": { 00:13:08.574 "nqn": "nqn.2016-06.io.spdk:cnode11401", 00:13:08.574 "tgt_name": "foobar" 00:13:08.574 } 00:13:08.574 } 00:13:08.574 Got JSON-RPC error response 00:13:08.574 GoRPCClient: error on JSON-RPC call' 00:13:08.574 10:13:28 -- target/invalid.sh@41 -- # [[ 2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode11401 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:13:08.574 request: 00:13:08.574 { 00:13:08.574 "method": "nvmf_create_subsystem", 00:13:08.574 "params": { 00:13:08.574 "nqn": "nqn.2016-06.io.spdk:cnode11401", 00:13:08.574 "tgt_name": "foobar" 00:13:08.574 } 00:13:08.574 } 00:13:08.574 Got JSON-RPC error response 00:13:08.574 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:08.574 10:13:28 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:08.574 10:13:28 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32314 00:13:09.139 [2024-11-19 10:13:28.470908] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32314: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:09.139 10:13:28 -- target/invalid.sh@45 -- # out='2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode32314 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:09.139 request: 00:13:09.139 { 00:13:09.139 
"method": "nvmf_create_subsystem", 00:13:09.139 "params": { 00:13:09.139 "nqn": "nqn.2016-06.io.spdk:cnode32314", 00:13:09.139 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:09.139 } 00:13:09.139 } 00:13:09.139 Got JSON-RPC error response 00:13:09.139 GoRPCClient: error on JSON-RPC call' 00:13:09.139 10:13:28 -- target/invalid.sh@46 -- # [[ 2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode32314 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:13:09.139 request: 00:13:09.139 { 00:13:09.139 "method": "nvmf_create_subsystem", 00:13:09.139 "params": { 00:13:09.139 "nqn": "nqn.2016-06.io.spdk:cnode32314", 00:13:09.139 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:13:09.139 } 00:13:09.139 } 00:13:09.139 Got JSON-RPC error response 00:13:09.139 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.139 10:13:28 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:09.139 10:13:28 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16330 00:13:09.483 [2024-11-19 10:13:28.743225] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16330: invalid model number 'SPDK_Controller' 00:13:09.483 10:13:28 -- target/invalid.sh@50 -- # out='2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16330], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:09.483 request: 00:13:09.483 { 00:13:09.483 "method": "nvmf_create_subsystem", 00:13:09.483 "params": { 00:13:09.483 "nqn": "nqn.2016-06.io.spdk:cnode16330", 00:13:09.483 "model_number": "SPDK_Controller\u001f" 00:13:09.483 } 00:13:09.483 } 00:13:09.483 Got JSON-RPC error response 00:13:09.483 GoRPCClient: error on JSON-RPC call' 00:13:09.483 10:13:28 -- target/invalid.sh@51 -- # [[ 2024/11/19 10:13:28 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16330], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:13:09.483 request: 00:13:09.483 { 00:13:09.484 "method": "nvmf_create_subsystem", 00:13:09.484 "params": { 00:13:09.484 "nqn": "nqn.2016-06.io.spdk:cnode16330", 00:13:09.484 "model_number": "SPDK_Controller\u001f" 00:13:09.484 } 00:13:09.484 } 00:13:09.484 Got JSON-RPC error response 00:13:09.484 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:09.484 10:13:28 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:09.484 10:13:28 -- target/invalid.sh@19 -- # local length=21 ll 00:13:09.484 10:13:28 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:09.484 10:13:28 -- target/invalid.sh@21 -- # local chars 00:13:09.484 10:13:28 -- target/invalid.sh@22 -- # local 
string 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 60 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+='<' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 73 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=I 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 112 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=p 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 78 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=N 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 111 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=o 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 75 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=K 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 58 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=: 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 56 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=8 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 55 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=7 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 57 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=9 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 77 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=M 
00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 48 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=0 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 39 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=\' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 91 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+='[' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 96 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+='`' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 45 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=- 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 49 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=1 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 32 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=' ' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 50 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=2 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 116 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=t 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # printf %x 59 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:09.484 10:13:28 -- target/invalid.sh@25 -- # string+=';' 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:09.484 10:13:28 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:09.484 10:13:28 -- target/invalid.sh@28 -- # [[ < == \- ]] 00:13:09.484 10:13:28 -- target/invalid.sh@31 -- # echo ' /dev/null' 00:13:13.165 10:13:32 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:13.165 10:13:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:13.165 ************************************ 00:13:13.165 END TEST nvmf_invalid 00:13:13.165 ************************************ 00:13:13.165 00:13:13.165 real 0m6.562s 00:13:13.165 user 0m26.889s 00:13:13.165 sys 0m1.256s 00:13:13.165 10:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.165 10:13:32 -- common/autotest_common.sh@10 -- # set +x 00:13:13.165 10:13:32 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.165 10:13:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:13.165 10:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.165 10:13:32 -- common/autotest_common.sh@10 -- # set +x 00:13:13.166 ************************************ 00:13:13.166 START TEST nvmf_abort 00:13:13.166 ************************************ 00:13:13.166 10:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:13.424 * Looking for test storage... 00:13:13.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:13.424 10:13:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:13.424 10:13:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:13.424 10:13:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:13.424 10:13:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:13.424 10:13:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:13.424 10:13:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:13.424 10:13:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:13.425 10:13:32 -- scripts/common.sh@335 -- # IFS=.-: 00:13:13.425 10:13:32 -- scripts/common.sh@335 -- # read -ra ver1 00:13:13.425 10:13:32 -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.425 10:13:32 -- scripts/common.sh@336 -- # read -ra ver2 00:13:13.425 10:13:32 -- scripts/common.sh@337 -- # local 'op=<' 00:13:13.425 10:13:32 -- scripts/common.sh@339 -- # ver1_l=2 00:13:13.425 10:13:32 -- scripts/common.sh@340 -- # ver2_l=1 00:13:13.425 10:13:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:13.425 10:13:32 -- scripts/common.sh@343 -- # case "$op" in 00:13:13.425 10:13:32 -- scripts/common.sh@344 -- # : 1 00:13:13.425 10:13:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:13.425 10:13:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.425 10:13:32 -- scripts/common.sh@364 -- # decimal 1 00:13:13.425 10:13:32 -- scripts/common.sh@352 -- # local d=1 00:13:13.425 10:13:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.425 10:13:32 -- scripts/common.sh@354 -- # echo 1 00:13:13.425 10:13:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:13.425 10:13:32 -- scripts/common.sh@365 -- # decimal 2 00:13:13.425 10:13:32 -- scripts/common.sh@352 -- # local d=2 00:13:13.425 10:13:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.425 10:13:32 -- scripts/common.sh@354 -- # echo 2 00:13:13.425 10:13:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:13.425 10:13:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:13.425 10:13:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:13.425 10:13:32 -- scripts/common.sh@367 -- # return 0 00:13:13.425 10:13:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.425 10:13:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:13.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.425 --rc genhtml_branch_coverage=1 00:13:13.425 --rc genhtml_function_coverage=1 00:13:13.425 --rc genhtml_legend=1 00:13:13.425 --rc geninfo_all_blocks=1 00:13:13.425 --rc geninfo_unexecuted_blocks=1 00:13:13.425 00:13:13.425 ' 00:13:13.425 10:13:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:13.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.425 --rc genhtml_branch_coverage=1 00:13:13.425 --rc genhtml_function_coverage=1 00:13:13.425 --rc genhtml_legend=1 00:13:13.425 --rc geninfo_all_blocks=1 00:13:13.425 --rc geninfo_unexecuted_blocks=1 00:13:13.425 00:13:13.425 ' 00:13:13.425 10:13:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:13.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.425 --rc genhtml_branch_coverage=1 00:13:13.425 --rc genhtml_function_coverage=1 00:13:13.425 --rc genhtml_legend=1 00:13:13.425 --rc geninfo_all_blocks=1 00:13:13.425 --rc geninfo_unexecuted_blocks=1 00:13:13.425 00:13:13.425 ' 00:13:13.425 10:13:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:13.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.425 --rc genhtml_branch_coverage=1 00:13:13.425 --rc genhtml_function_coverage=1 00:13:13.425 --rc genhtml_legend=1 00:13:13.425 --rc geninfo_all_blocks=1 00:13:13.425 --rc geninfo_unexecuted_blocks=1 00:13:13.425 00:13:13.425 ' 00:13:13.425 10:13:32 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:13.425 10:13:32 -- nvmf/common.sh@7 -- # uname -s 00:13:13.425 10:13:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.425 10:13:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.425 10:13:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.425 10:13:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.425 10:13:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.425 10:13:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.425 10:13:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.425 10:13:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.425 10:13:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.425 10:13:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:13:13.425 
10:13:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:13:13.425 10:13:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.425 10:13:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.425 10:13:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:13.425 10:13:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:13.425 10:13:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.425 10:13:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.425 10:13:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.425 10:13:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.425 10:13:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.425 10:13:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.425 10:13:32 -- paths/export.sh@5 -- # export PATH 00:13:13.425 10:13:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.425 10:13:32 -- nvmf/common.sh@46 -- # : 0 00:13:13.425 10:13:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:13.425 10:13:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:13.425 10:13:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:13.425 10:13:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.425 10:13:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.425 10:13:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:13:13.425 10:13:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:13.425 10:13:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:13.425 10:13:32 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.425 10:13:32 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:13.425 10:13:32 -- target/abort.sh@14 -- # nvmftestinit 00:13:13.425 10:13:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:13.425 10:13:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.425 10:13:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:13.425 10:13:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:13.425 10:13:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:13.425 10:13:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.425 10:13:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.425 10:13:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.425 10:13:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:13.425 10:13:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:13.425 10:13:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.425 10:13:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.425 10:13:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:13.425 10:13:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:13.425 10:13:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:13.425 10:13:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:13.425 10:13:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:13.425 10:13:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.425 10:13:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:13.425 10:13:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:13.425 10:13:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:13.425 10:13:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:13.425 10:13:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:13.425 10:13:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:13.425 Cannot find device "nvmf_tgt_br" 00:13:13.425 10:13:32 -- nvmf/common.sh@154 -- # true 00:13:13.425 10:13:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:13.425 Cannot find device "nvmf_tgt_br2" 00:13:13.425 10:13:32 -- nvmf/common.sh@155 -- # true 00:13:13.425 10:13:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:13.425 10:13:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:13.425 Cannot find device "nvmf_tgt_br" 00:13:13.684 10:13:32 -- nvmf/common.sh@157 -- # true 00:13:13.684 10:13:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:13.684 Cannot find device "nvmf_tgt_br2" 00:13:13.684 10:13:32 -- nvmf/common.sh@158 -- # true 00:13:13.684 10:13:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:13.684 10:13:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:13.684 10:13:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:13.684 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:13.684 10:13:33 -- nvmf/common.sh@161 -- # true 00:13:13.684 10:13:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:13.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:13.684 10:13:33 -- nvmf/common.sh@162 -- # true 00:13:13.684 10:13:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:13.684 10:13:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:13.684 10:13:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:13.684 10:13:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:13.684 10:13:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:13.684 10:13:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:13.684 10:13:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:13.684 10:13:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:13.684 10:13:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:13.684 10:13:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:13.684 10:13:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:13.684 10:13:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:13.684 10:13:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:13.684 10:13:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:13.684 10:13:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:13.684 10:13:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:13.684 10:13:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:13.684 10:13:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:13.684 10:13:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:13.684 10:13:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:13.684 10:13:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:13.684 10:13:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:13.684 10:13:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:13.684 10:13:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:13.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:13:13.684 00:13:13.684 --- 10.0.0.2 ping statistics --- 00:13:13.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.684 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:13:13.684 10:13:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:13.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:13.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:13.684 00:13:13.684 --- 10.0.0.3 ping statistics --- 00:13:13.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.684 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:13.684 10:13:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:13.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:13.684 00:13:13.684 --- 10.0.0.1 ping statistics --- 00:13:13.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.684 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:13.684 10:13:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.684 10:13:33 -- nvmf/common.sh@421 -- # return 0 00:13:13.684 10:13:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:13.684 10:13:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.684 10:13:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:13.684 10:13:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:13.684 10:13:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.684 10:13:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:13.684 10:13:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.942 10:13:33 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:13.942 10:13:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.942 10:13:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.942 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:13.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.942 10:13:33 -- nvmf/common.sh@469 -- # nvmfpid=78668 00:13:13.942 10:13:33 -- nvmf/common.sh@470 -- # waitforlisten 78668 00:13:13.942 10:13:33 -- common/autotest_common.sh@829 -- # '[' -z 78668 ']' 00:13:13.942 10:13:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.942 10:13:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.942 10:13:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.942 10:13:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.942 10:13:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.942 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:13.942 [2024-11-19 10:13:33.308235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:13.942 [2024-11-19 10:13:33.308368] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.942 [2024-11-19 10:13:33.444135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.942 [2024-11-19 10:13:33.481647] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.942 [2024-11-19 10:13:33.481883] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.942 [2024-11-19 10:13:33.481911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.942 [2024-11-19 10:13:33.481927] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
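For reference, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology that the test target then listens on: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, both target interfaces live inside the nvmf_tgt_ns_spdk namespace, and the host-side peers are joined by the nvmf_br bridge. A condensed sketch of the same layout, assembled from the commands in the trace (interface names and the 10.0.0.x/24 addresses are taken as-is; the per-interface "up" steps and the lo bring-up inside the namespace are abbreviated), roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # reachability check, as in the trace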
00:13:13.942 [2024-11-19 10:13:33.482085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.942 [2024-11-19 10:13:33.483000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.942 [2024-11-19 10:13:33.483022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.200 10:13:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.200 10:13:33 -- common/autotest_common.sh@862 -- # return 0 00:13:14.200 10:13:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:14.200 10:13:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 10:13:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.200 10:13:33 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 [2024-11-19 10:13:33.653389] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 Malloc0 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 Delay0 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 [2024-11-19 10:13:33.716357] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.200 10:13:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.200 10:13:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.200 10:13:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.200 10:13:33 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:14.458 [2024-11-19 10:13:33.886785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:16.382 Initializing NVMe Controllers 00:13:16.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:16.382 controller IO queue size 128 less than required 00:13:16.382 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:16.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:16.382 Initialization complete. Launching workers. 00:13:16.382 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31741 00:13:16.382 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31802, failed to submit 62 00:13:16.382 success 31741, unsuccess 61, failed 0 00:13:16.382 10:13:35 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:16.382 10:13:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.382 10:13:35 -- common/autotest_common.sh@10 -- # set +x 00:13:16.382 10:13:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.383 10:13:35 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:16.383 10:13:35 -- target/abort.sh@38 -- # nvmftestfini 00:13:16.383 10:13:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:16.383 10:13:35 -- nvmf/common.sh@116 -- # sync 00:13:16.641 10:13:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:16.641 10:13:35 -- nvmf/common.sh@119 -- # set +e 00:13:16.641 10:13:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:16.641 10:13:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:16.641 rmmod nvme_tcp 00:13:16.641 rmmod nvme_fabrics 00:13:16.641 rmmod nvme_keyring 00:13:16.641 10:13:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:16.641 10:13:36 -- nvmf/common.sh@123 -- # set -e 00:13:16.641 10:13:36 -- nvmf/common.sh@124 -- # return 0 00:13:16.641 10:13:36 -- nvmf/common.sh@477 -- # '[' -n 78668 ']' 00:13:16.641 10:13:36 -- nvmf/common.sh@478 -- # killprocess 78668 00:13:16.641 10:13:36 -- common/autotest_common.sh@936 -- # '[' -z 78668 ']' 00:13:16.641 10:13:36 -- common/autotest_common.sh@940 -- # kill -0 78668 00:13:16.641 10:13:36 -- common/autotest_common.sh@941 -- # uname 00:13:16.641 10:13:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:16.641 10:13:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78668 00:13:16.641 killing process with pid 78668 00:13:16.641 10:13:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:16.641 10:13:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:16.641 10:13:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78668' 00:13:16.641 10:13:36 -- common/autotest_common.sh@955 -- # kill 78668 00:13:16.641 10:13:36 -- common/autotest_common.sh@960 -- # wait 78668 00:13:16.899 10:13:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:16.899 10:13:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:16.899 10:13:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:16.899 10:13:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.899 10:13:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:16.900 10:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.900 
10:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.900 10:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.900 10:13:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:16.900 00:13:16.900 real 0m3.548s 00:13:16.900 user 0m10.093s 00:13:16.900 sys 0m0.859s 00:13:16.900 10:13:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:16.900 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:16.900 ************************************ 00:13:16.900 END TEST nvmf_abort 00:13:16.900 ************************************ 00:13:16.900 10:13:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.900 10:13:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:16.900 10:13:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:16.900 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:16.900 ************************************ 00:13:16.900 START TEST nvmf_ns_hotplug_stress 00:13:16.900 ************************************ 00:13:16.900 10:13:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:16.900 * Looking for test storage... 00:13:16.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:16.900 10:13:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:16.900 10:13:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:16.900 10:13:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:16.900 10:13:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:16.900 10:13:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:16.900 10:13:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:16.900 10:13:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:16.900 10:13:36 -- scripts/common.sh@335 -- # IFS=.-: 00:13:16.900 10:13:36 -- scripts/common.sh@335 -- # read -ra ver1 00:13:16.900 10:13:36 -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.900 10:13:36 -- scripts/common.sh@336 -- # read -ra ver2 00:13:16.900 10:13:36 -- scripts/common.sh@337 -- # local 'op=<' 00:13:16.900 10:13:36 -- scripts/common.sh@339 -- # ver1_l=2 00:13:16.900 10:13:36 -- scripts/common.sh@340 -- # ver2_l=1 00:13:16.900 10:13:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:16.900 10:13:36 -- scripts/common.sh@343 -- # case "$op" in 00:13:16.900 10:13:36 -- scripts/common.sh@344 -- # : 1 00:13:16.900 10:13:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:16.900 10:13:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:16.900 10:13:36 -- scripts/common.sh@364 -- # decimal 1 00:13:16.900 10:13:36 -- scripts/common.sh@352 -- # local d=1 00:13:16.900 10:13:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.900 10:13:36 -- scripts/common.sh@354 -- # echo 1 00:13:16.900 10:13:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:16.900 10:13:36 -- scripts/common.sh@365 -- # decimal 2 00:13:16.900 10:13:36 -- scripts/common.sh@352 -- # local d=2 00:13:16.900 10:13:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.900 10:13:36 -- scripts/common.sh@354 -- # echo 2 00:13:17.159 10:13:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:17.159 10:13:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:17.159 10:13:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:17.159 10:13:36 -- scripts/common.sh@367 -- # return 0 00:13:17.159 10:13:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.159 10:13:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.159 --rc genhtml_branch_coverage=1 00:13:17.159 --rc genhtml_function_coverage=1 00:13:17.159 --rc genhtml_legend=1 00:13:17.159 --rc geninfo_all_blocks=1 00:13:17.159 --rc geninfo_unexecuted_blocks=1 00:13:17.159 00:13:17.159 ' 00:13:17.159 10:13:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:17.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.159 --rc genhtml_branch_coverage=1 00:13:17.159 --rc genhtml_function_coverage=1 00:13:17.159 --rc genhtml_legend=1 00:13:17.159 --rc geninfo_all_blocks=1 00:13:17.159 --rc geninfo_unexecuted_blocks=1 00:13:17.160 00:13:17.160 ' 00:13:17.160 10:13:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:17.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.160 --rc genhtml_branch_coverage=1 00:13:17.160 --rc genhtml_function_coverage=1 00:13:17.160 --rc genhtml_legend=1 00:13:17.160 --rc geninfo_all_blocks=1 00:13:17.160 --rc geninfo_unexecuted_blocks=1 00:13:17.160 00:13:17.160 ' 00:13:17.160 10:13:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:17.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.160 --rc genhtml_branch_coverage=1 00:13:17.160 --rc genhtml_function_coverage=1 00:13:17.160 --rc genhtml_legend=1 00:13:17.160 --rc geninfo_all_blocks=1 00:13:17.160 --rc geninfo_unexecuted_blocks=1 00:13:17.160 00:13:17.160 ' 00:13:17.160 10:13:36 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:17.160 10:13:36 -- nvmf/common.sh@7 -- # uname -s 00:13:17.160 10:13:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.160 10:13:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.160 10:13:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.160 10:13:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.160 10:13:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.160 10:13:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.160 10:13:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.160 10:13:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.160 10:13:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.160 10:13:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
00:13:17.160 10:13:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:13:17.160 10:13:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.160 10:13:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.160 10:13:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:17.160 10:13:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:17.160 10:13:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.160 10:13:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.160 10:13:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.160 10:13:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.160 10:13:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.160 10:13:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.160 10:13:36 -- paths/export.sh@5 -- # export PATH 00:13:17.160 10:13:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.160 10:13:36 -- nvmf/common.sh@46 -- # : 0 00:13:17.160 10:13:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:17.160 10:13:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:17.160 10:13:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:17.160 10:13:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.160 10:13:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.160 10:13:36 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:17.160 10:13:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:17.160 10:13:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:17.160 10:13:36 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:17.160 10:13:36 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:17.160 10:13:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:17.160 10:13:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.160 10:13:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:17.160 10:13:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:17.160 10:13:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:17.160 10:13:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.160 10:13:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.160 10:13:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.160 10:13:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:17.160 10:13:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:17.160 10:13:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.160 10:13:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.160 10:13:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:17.160 10:13:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:17.160 10:13:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:17.160 10:13:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:17.160 10:13:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:17.160 10:13:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.160 10:13:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:17.160 10:13:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:17.160 10:13:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:17.160 10:13:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:17.160 10:13:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:17.160 10:13:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:17.160 Cannot find device "nvmf_tgt_br" 00:13:17.160 10:13:36 -- nvmf/common.sh@154 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:17.160 Cannot find device "nvmf_tgt_br2" 00:13:17.160 10:13:36 -- nvmf/common.sh@155 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:17.160 10:13:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:17.160 Cannot find device "nvmf_tgt_br" 00:13:17.160 10:13:36 -- nvmf/common.sh@157 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:17.160 Cannot find device "nvmf_tgt_br2" 00:13:17.160 10:13:36 -- nvmf/common.sh@158 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:17.160 10:13:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:17.160 10:13:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:17.160 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:13:17.160 10:13:36 -- nvmf/common.sh@161 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:17.160 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:17.160 10:13:36 -- nvmf/common.sh@162 -- # true 00:13:17.160 10:13:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:17.160 10:13:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:17.160 10:13:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:17.160 10:13:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:17.160 10:13:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:17.160 10:13:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:17.419 10:13:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:17.419 10:13:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:17.419 10:13:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:17.419 10:13:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:17.419 10:13:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:17.419 10:13:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:17.419 10:13:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:17.419 10:13:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:17.419 10:13:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:17.419 10:13:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:17.419 10:13:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:17.419 10:13:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:17.419 10:13:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:17.419 10:13:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:17.419 10:13:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:17.420 10:13:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:17.420 10:13:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:17.420 10:13:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:17.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:13:17.420 00:13:17.420 --- 10.0.0.2 ping statistics --- 00:13:17.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.420 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:13:17.420 10:13:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:17.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:17.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:13:17.420 00:13:17.420 --- 10.0.0.3 ping statistics --- 00:13:17.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.420 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:17.420 10:13:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:17.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:17.420 00:13:17.420 --- 10.0.0.1 ping statistics --- 00:13:17.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.420 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:17.420 10:13:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.420 10:13:36 -- nvmf/common.sh@421 -- # return 0 00:13:17.420 10:13:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:17.420 10:13:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.420 10:13:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:17.420 10:13:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:17.420 10:13:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.420 10:13:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:17.420 10:13:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:17.420 10:13:36 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:17.420 10:13:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:17.420 10:13:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.420 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:17.420 10:13:36 -- nvmf/common.sh@469 -- # nvmfpid=78905 00:13:17.420 10:13:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.420 10:13:36 -- nvmf/common.sh@470 -- # waitforlisten 78905 00:13:17.420 10:13:36 -- common/autotest_common.sh@829 -- # '[' -z 78905 ']' 00:13:17.420 10:13:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.420 10:13:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:17.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.420 10:13:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.420 10:13:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:17.420 10:13:36 -- common/autotest_common.sh@10 -- # set +x 00:13:17.420 [2024-11-19 10:13:36.873095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:17.420 [2024-11-19 10:13:36.873183] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.678 [2024-11-19 10:13:37.009719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.678 [2024-11-19 10:13:37.044159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.678 [2024-11-19 10:13:37.044310] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.678 [2024-11-19 10:13:37.044324] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.678 [2024-11-19 10:13:37.044332] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
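nvmfappstart above launches nvmf_tgt inside the test namespace and waitforlisten blocks until the target answers on the UNIX RPC socket /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, using the same command line as the trace; the rpc_get_methods probe and the sleep interval are illustrative assumptions, not the harness's exact retry logic:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the RPC socket until the target is ready to accept configuration
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"    # stop waiting if the target already died
    sleep 0.5
done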
00:13:17.678 [2024-11-19 10:13:37.047852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.678 [2024-11-19 10:13:37.047941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.678 [2024-11-19 10:13:37.047951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.614 10:13:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.614 10:13:37 -- common/autotest_common.sh@862 -- # return 0 00:13:18.614 10:13:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:18.614 10:13:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.614 10:13:37 -- common/autotest_common.sh@10 -- # set +x 00:13:18.614 10:13:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.614 10:13:37 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:18.614 10:13:37 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:18.873 [2024-11-19 10:13:38.227003] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.873 10:13:38 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:19.131 10:13:38 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.389 [2024-11-19 10:13:38.752912] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.389 10:13:38 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:19.648 10:13:39 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:19.905 Malloc0 00:13:19.905 10:13:39 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:20.162 Delay0 00:13:20.163 10:13:39 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.420 10:13:39 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:20.678 NULL1 00:13:20.678 10:13:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:21.243 10:13:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79047 00:13:21.243 10:13:40 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:21.243 10:13:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:21.243 10:13:40 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.615 Read completed with error (sct=0, sc=11) 00:13:22.615 10:13:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.615 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:22.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.615 10:13:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:22.615 10:13:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:22.874 true 00:13:22.874 10:13:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:22.874 10:13:42 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.809 10:13:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.068 10:13:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:24.068 10:13:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:24.327 true 00:13:24.327 10:13:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:24.327 10:13:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.585 10:13:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.844 10:13:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:24.844 10:13:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:25.103 true 00:13:25.103 10:13:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:25.103 10:13:44 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.361 10:13:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.928 10:13:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:25.928 10:13:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:26.186 true 00:13:26.186 10:13:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:26.186 10:13:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.444 10:13:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.703 10:13:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:26.703 10:13:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:26.961 true 00:13:26.961 10:13:46 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:26.961 10:13:46 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.220 10:13:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
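The entries above are passes of the hotplug stress loop: while spdk_nvme_perf (PERF_PID 79047) keeps 128 queued 512-byte random reads against cnode1, each iteration confirms perf is still alive, hot-removes namespace 1, re-adds the Delay0 bdev as a namespace, then bumps null_size and resizes the NULL1 bdev. A condensed sketch of a single iteration, using the rpc_py path set earlier in the trace; the real script's loop control and error handling are omitted:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
kill -0 "$PERF_PID"                                                   # the perf initiator must still be running
"$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove the namespace under I/O
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add it back
null_size=$((null_size + 1))
"$rpc_py" bdev_null_resize NULL1 "$null_size"                         # grow the NULL1 bdev for the next pass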
00:13:27.479 10:13:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:27.479 10:13:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:27.738 true 00:13:27.738 10:13:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:27.738 10:13:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.672 10:13:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.930 10:13:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:28.930 10:13:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:29.215 true 00:13:29.215 10:13:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:29.215 10:13:48 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.474 10:13:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.731 10:13:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:29.731 10:13:49 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:29.990 true 00:13:29.990 10:13:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:29.990 10:13:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.557 10:13:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.815 10:13:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:30.815 10:13:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:31.072 true 00:13:31.072 10:13:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:31.072 10:13:50 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.331 10:13:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.590 10:13:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:31.590 10:13:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:31.848 true 00:13:31.848 10:13:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:31.848 10:13:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.782 10:13:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.040 10:13:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:33.040 10:13:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:33.298 true 00:13:33.298 10:13:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:33.298 10:13:52 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:33.556 10:13:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.814 10:13:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:33.814 10:13:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:34.379 true 00:13:34.379 10:13:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:34.379 10:13:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.637 10:13:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.895 10:13:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:34.895 10:13:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:35.153 true 00:13:35.153 10:13:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:35.153 10:13:54 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.720 10:13:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.978 10:13:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:35.978 10:13:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:36.237 true 00:13:36.237 10:13:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:36.237 10:13:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.803 10:13:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.061 10:13:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:37.061 10:13:56 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:37.319 true 00:13:37.319 10:13:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:37.319 10:13:56 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.578 10:13:57 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.142 10:13:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:38.142 10:13:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:38.399 true 00:13:38.400 10:13:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:38.400 10:13:57 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.657 10:13:58 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.915 10:13:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:38.915 10:13:58 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:39.174 true 00:13:39.431 10:13:58 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:39.431 10:13:58 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.690 10:13:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.949 10:13:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:39.949 10:13:59 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:40.208 true 00:13:40.208 10:13:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:40.208 10:13:59 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.467 10:13:59 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.724 10:14:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:40.724 10:14:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:40.982 true 00:13:40.982 10:14:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:40.982 10:14:00 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.239 10:14:00 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.498 10:14:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:41.498 10:14:00 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:41.756 true 00:13:41.756 10:14:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:41.756 10:14:01 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.690 10:14:02 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.949 10:14:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:42.949 10:14:02 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:43.515 true 00:13:43.515 10:14:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:43.515 10:14:02 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 10:14:04 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.147 10:14:04 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:45.147 10:14:04 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:45.405 true 00:13:45.405 10:14:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:45.405 10:14:04 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:45.969 10:14:05 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.533 10:14:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:46.533 10:14:05 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:46.790 true 00:13:46.790 10:14:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:46.790 10:14:06 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.270 10:14:07 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:48.270 10:14:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:48.270 10:14:07 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:48.528 true 00:13:48.528 10:14:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:48.528 10:14:08 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.462 10:14:08 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.719 10:14:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:49.719 10:14:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:49.977 true 00:13:49.977 10:14:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:49.977 10:14:09 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.234 10:14:09 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.492 10:14:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:50.492 10:14:09 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:50.749 true 00:13:50.749 10:14:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:50.749 10:14:10 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.008 10:14:10 -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.266 10:14:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:51.266 10:14:10 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:51.524 Initializing NVMe Controllers 00:13:51.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:51.524 Controller IO queue size 128, less than required. 00:13:51.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.524 Controller IO queue size 128, less than required. 00:13:51.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:51.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:51.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:51.524 Initialization complete. Launching workers. 00:13:51.524 ======================================================== 00:13:51.524 Latency(us) 00:13:51.524 Device Information : IOPS MiB/s Average min max 00:13:51.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 801.31 0.39 55340.39 3371.17 1054484.56 00:13:51.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7213.54 3.52 17748.14 2847.79 812760.16 00:13:51.524 ======================================================== 00:13:51.524 Total : 8014.85 3.91 21506.54 2847.79 1054484.56 00:13:51.524 00:13:51.783 true 00:13:51.783 10:14:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79047 00:13:51.783 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79047) - No such process 00:13:51.783 10:14:11 -- target/ns_hotplug_stress.sh@53 -- # wait 79047 00:13:51.783 10:14:11 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.041 10:14:11 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.300 10:14:11 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:52.300 10:14:11 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:52.300 10:14:11 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:52.300 10:14:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.300 10:14:11 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:52.559 null0 00:13:52.559 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:52.559 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:52.559 10:14:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:53.126 null1 00:13:53.126 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.126 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.126 10:14:12 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:53.753 null2 00:13:53.753 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:53.753 10:14:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:53.753 10:14:12 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:54.012 null3 00:13:54.012 10:14:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.012 10:14:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.012 10:14:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:54.577 null4 00:13:54.577 10:14:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.577 10:14:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.577 10:14:13 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:54.835 null5 00:13:54.835 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:54.835 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:54.835 10:14:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:55.093 null6 00:13:55.093 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.093 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.093 10:14:14 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:55.352 null7 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
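Here the test has moved to its multi-namespace phase: eight null bdevs (null0 through null7) were created, and eight background copies of the add_remove helper are being launched, one per namespace ID, which is what produces the heavily interleaved add_ns/remove_ns trace that follows. A sketch of the helper and its launcher, pieced together from the traced script lines 14-18 and 58-66; the RPC calls, counters and the ten-iteration bound come from the trace, the exact function and loop wording is an assumption:

    add_remove() {                                                      # lines 14-18: one worker per namespace ID
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
        done
    }

    nthreads=8                                                          # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                                # lines 59-60: create the backing bdevs null0..null7
        $rpc bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do                                # lines 62-64: launch the workers in the background
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                                                   # line 66: pids 80042 80043 80045 ... in this run

Because all eight workers target the same subsystem concurrently their xtrace output interleaves, but each add/remove pair only ever touches the worker's own namespace ID, so the operations do not conflict.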
00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:55.352 10:14:14 -- target/ns_hotplug_stress.sh@66 -- # wait 80042 80043 80045 80048 80049 80051 80053 80055 00:13:55.611 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.611 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.611 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.611 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.870 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.129 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:56.387 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:56.645 10:14:15 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.646 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:56.646 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:56.646 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:56.646 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:56.646 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:56.904 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.163 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.421 10:14:16 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:57.679 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:57.938 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.196 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:58.454 10:14:17 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.712 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:58.971 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.230 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.230 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:59.230 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:59.230 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:59.230 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.488 10:14:18 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:59.858 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.116 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.116 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.116 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:00.116 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.374 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.375 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.633 10:14:19 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:00.633 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.633 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.633 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:00.890 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:00.891 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:00.891 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:00.891 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.149 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.149 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.149 10:14:20 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.149 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:01.407 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.407 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.407 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:01.407 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.408 10:14:20 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:01.666 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.666 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.666 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:01.666 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@18 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:01.925 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.183 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.184 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:02.442 10:14:21 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:02.700 
10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:02.700 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:02.959 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:03.219 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.477 10:14:22 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:03.477 10:14:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.735 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:03.994 10:14:23 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:04.252 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.252 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.252 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.252 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.511 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.511 10:14:23 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.511 10:14:23 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:04.511 10:14:23 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:04.511 10:14:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:04.511 10:14:23 -- nvmf/common.sh@116 -- # sync 00:14:04.511 10:14:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:04.511 10:14:23 -- nvmf/common.sh@119 -- # set +e 00:14:04.511 10:14:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:04.511 10:14:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:04.511 rmmod nvme_tcp 00:14:04.511 rmmod nvme_fabrics 00:14:04.511 rmmod nvme_keyring 00:14:04.511 10:14:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:04.511 10:14:23 -- nvmf/common.sh@123 -- # set -e 00:14:04.511 10:14:23 -- nvmf/common.sh@124 -- # return 0 00:14:04.511 10:14:23 -- nvmf/common.sh@477 -- # '[' -n 78905 ']' 00:14:04.511 10:14:23 -- nvmf/common.sh@478 -- # killprocess 78905 00:14:04.511 10:14:23 -- common/autotest_common.sh@936 -- # '[' -z 78905 ']' 00:14:04.511 10:14:23 -- common/autotest_common.sh@940 -- # kill -0 78905 00:14:04.511 10:14:23 -- common/autotest_common.sh@941 -- # uname 00:14:04.511 10:14:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.511 10:14:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78905 00:14:04.511 killing process with pid 78905 00:14:04.511 10:14:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:04.511 10:14:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo 
']' 00:14:04.511 10:14:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78905' 00:14:04.511 10:14:23 -- common/autotest_common.sh@955 -- # kill 78905 00:14:04.511 10:14:23 -- common/autotest_common.sh@960 -- # wait 78905 00:14:04.770 10:14:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:04.770 10:14:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:04.770 10:14:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:04.770 10:14:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.770 10:14:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:04.770 10:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.770 10:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.770 10:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.770 10:14:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:04.770 ************************************ 00:14:04.770 END TEST nvmf_ns_hotplug_stress 00:14:04.770 ************************************ 00:14:04.770 00:14:04.770 real 0m47.859s 00:14:04.770 user 4m6.092s 00:14:04.770 sys 0m13.905s 00:14:04.770 10:14:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:04.770 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:04.770 10:14:24 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:04.770 10:14:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:04.770 10:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.770 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:04.770 ************************************ 00:14:04.770 START TEST nvmf_connect_stress 00:14:04.770 ************************************ 00:14:04.770 10:14:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:04.770 * Looking for test storage... 00:14:04.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.770 10:14:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:04.770 10:14:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:04.770 10:14:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:05.029 10:14:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:05.029 10:14:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:05.029 10:14:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:05.029 10:14:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:05.029 10:14:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:05.029 10:14:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:05.029 10:14:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:05.029 10:14:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:05.029 10:14:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:05.029 10:14:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:05.029 10:14:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:05.029 10:14:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:05.029 10:14:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:05.029 10:14:24 -- scripts/common.sh@344 -- # : 1 00:14:05.029 10:14:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:05.029 10:14:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:05.029 10:14:24 -- scripts/common.sh@364 -- # decimal 1 00:14:05.029 10:14:24 -- scripts/common.sh@352 -- # local d=1 00:14:05.029 10:14:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:05.029 10:14:24 -- scripts/common.sh@354 -- # echo 1 00:14:05.029 10:14:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:05.029 10:14:24 -- scripts/common.sh@365 -- # decimal 2 00:14:05.029 10:14:24 -- scripts/common.sh@352 -- # local d=2 00:14:05.029 10:14:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:05.029 10:14:24 -- scripts/common.sh@354 -- # echo 2 00:14:05.029 10:14:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:05.029 10:14:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:05.029 10:14:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:05.029 10:14:24 -- scripts/common.sh@367 -- # return 0 00:14:05.029 10:14:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:05.029 10:14:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:05.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.029 --rc genhtml_branch_coverage=1 00:14:05.029 --rc genhtml_function_coverage=1 00:14:05.029 --rc genhtml_legend=1 00:14:05.029 --rc geninfo_all_blocks=1 00:14:05.029 --rc geninfo_unexecuted_blocks=1 00:14:05.029 00:14:05.029 ' 00:14:05.029 10:14:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:05.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.029 --rc genhtml_branch_coverage=1 00:14:05.029 --rc genhtml_function_coverage=1 00:14:05.029 --rc genhtml_legend=1 00:14:05.029 --rc geninfo_all_blocks=1 00:14:05.029 --rc geninfo_unexecuted_blocks=1 00:14:05.029 00:14:05.029 ' 00:14:05.029 10:14:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:05.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.029 --rc genhtml_branch_coverage=1 00:14:05.029 --rc genhtml_function_coverage=1 00:14:05.029 --rc genhtml_legend=1 00:14:05.029 --rc geninfo_all_blocks=1 00:14:05.029 --rc geninfo_unexecuted_blocks=1 00:14:05.029 00:14:05.029 ' 00:14:05.029 10:14:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:05.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:05.029 --rc genhtml_branch_coverage=1 00:14:05.029 --rc genhtml_function_coverage=1 00:14:05.029 --rc genhtml_legend=1 00:14:05.029 --rc geninfo_all_blocks=1 00:14:05.029 --rc geninfo_unexecuted_blocks=1 00:14:05.029 00:14:05.029 ' 00:14:05.029 10:14:24 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.029 10:14:24 -- nvmf/common.sh@7 -- # uname -s 00:14:05.029 10:14:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.029 10:14:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.029 10:14:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.029 10:14:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.029 10:14:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.029 10:14:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.029 10:14:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.029 10:14:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.029 10:14:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.029 10:14:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.029 10:14:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
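The ns_hotplug_stress loop that just finished (lines 16-18 of the script in the trace above) alternates nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns calls against cnode1 for ten iterations while traffic runs. A minimal sketch of that loop, assuming the target, the subsystem and the null0..null4 bdevs already exist and rpc.py talks to the default /var/tmp/spdk.sock; the add/remove choice below is illustrative rather than the script's exact randomization:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; i++)); do
        nsid=$((RANDOM % 5 + 1))                     # namespace IDs 1..5, backed by null0..null4
        if ((RANDOM % 2)); then
            $RPC nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))"
        else
            $RPC nvmf_subsystem_remove_ns "$NQN" "$nsid"
        fi
    done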
00:14:05.029 10:14:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:05.029 10:14:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.029 10:14:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.029 10:14:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.029 10:14:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.029 10:14:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.029 10:14:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.029 10:14:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.029 10:14:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.029 10:14:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.029 10:14:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.029 10:14:24 -- paths/export.sh@5 -- # export PATH 00:14:05.029 10:14:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.030 10:14:24 -- nvmf/common.sh@46 -- # : 0 00:14:05.030 10:14:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.030 10:14:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.030 10:14:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.030 10:14:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.030 10:14:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.030 10:14:24 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:05.030 10:14:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.030 10:14:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.030 10:14:24 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:05.030 10:14:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:05.030 10:14:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.030 10:14:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.030 10:14:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:05.030 10:14:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.030 10:14:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.030 10:14:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.030 10:14:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.030 10:14:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:05.030 10:14:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:05.030 10:14:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:05.030 10:14:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:05.030 10:14:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:05.030 10:14:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:05.030 10:14:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.030 10:14:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.030 10:14:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.030 10:14:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:05.030 10:14:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.030 10:14:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.030 10:14:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.030 10:14:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.030 10:14:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.030 10:14:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.030 10:14:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.030 10:14:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.030 10:14:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.030 10:14:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:05.030 Cannot find device "nvmf_tgt_br" 00:14:05.030 10:14:24 -- nvmf/common.sh@154 -- # true 00:14:05.030 10:14:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.030 Cannot find device "nvmf_tgt_br2" 00:14:05.030 10:14:24 -- nvmf/common.sh@155 -- # true 00:14:05.030 10:14:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:05.030 10:14:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:05.030 Cannot find device "nvmf_tgt_br" 00:14:05.030 10:14:24 -- nvmf/common.sh@157 -- # true 00:14:05.030 10:14:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:05.030 Cannot find device "nvmf_tgt_br2" 00:14:05.030 10:14:24 -- nvmf/common.sh@158 -- # true 00:14:05.030 10:14:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:05.030 10:14:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:05.030 10:14:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.030 10:14:24 -- nvmf/common.sh@161 -- # true 00:14:05.030 10:14:24 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.030 10:14:24 -- nvmf/common.sh@162 -- # true 00:14:05.030 10:14:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.030 10:14:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.030 10:14:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.030 10:14:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.030 10:14:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.030 10:14:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.030 10:14:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.289 10:14:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:05.289 10:14:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:05.289 10:14:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:05.289 10:14:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:05.289 10:14:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:05.289 10:14:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:05.289 10:14:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.289 10:14:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.289 10:14:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.289 10:14:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:05.289 10:14:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:05.289 10:14:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.289 10:14:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.289 10:14:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.289 10:14:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.289 10:14:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.289 10:14:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:05.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:05.289 00:14:05.289 --- 10.0.0.2 ping statistics --- 00:14:05.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.289 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:05.289 10:14:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:05.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:05.289 00:14:05.289 --- 10.0.0.3 ping statistics --- 00:14:05.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.289 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:05.289 10:14:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:05.289 00:14:05.289 --- 10.0.0.1 ping statistics --- 00:14:05.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.289 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:05.289 10:14:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.289 10:14:24 -- nvmf/common.sh@421 -- # return 0 00:14:05.289 10:14:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:05.289 10:14:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.289 10:14:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:05.289 10:14:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:05.289 10:14:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.289 10:14:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:05.289 10:14:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:05.289 10:14:24 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:05.289 10:14:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:05.289 10:14:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.289 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:05.289 10:14:24 -- nvmf/common.sh@469 -- # nvmfpid=81420 00:14:05.289 10:14:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:05.289 10:14:24 -- nvmf/common.sh@470 -- # waitforlisten 81420 00:14:05.289 10:14:24 -- common/autotest_common.sh@829 -- # '[' -z 81420 ']' 00:14:05.289 10:14:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.289 10:14:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.289 10:14:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.289 10:14:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.289 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:14:05.289 [2024-11-19 10:14:24.805666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:05.289 [2024-11-19 10:14:24.805834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.574 [2024-11-19 10:14:24.947922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.574 [2024-11-19 10:14:24.988733] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:05.574 [2024-11-19 10:14:24.988961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.574 [2024-11-19 10:14:24.988984] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.574 [2024-11-19 10:14:24.988998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.574 [2024-11-19 10:14:24.989076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.574 [2024-11-19 10:14:24.989661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.574 [2024-11-19 10:14:24.989682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.555 10:14:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.556 10:14:25 -- common/autotest_common.sh@862 -- # return 0 00:14:06.556 10:14:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:06.556 10:14:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.556 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:14:06.556 10:14:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.556 10:14:26 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.556 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.556 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:06.556 [2024-11-19 10:14:26.020907] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.556 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.556 10:14:26 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:06.556 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.556 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:06.556 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.556 10:14:26 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.556 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.556 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:06.556 [2024-11-19 10:14:26.041079] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.556 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.556 10:14:26 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:06.556 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.556 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:06.556 NULL1 00:14:06.556 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.556 10:14:26 -- target/connect_stress.sh@21 -- # PERF_PID=81472 00:14:06.556 10:14:26 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:06.556 10:14:26 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:06.556 10:14:26 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- 
target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.556 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.556 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.814 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.814 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.814 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.814 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.814 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.814 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.814 10:14:26 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.814 10:14:26 -- target/connect_stress.sh@28 -- # cat 00:14:06.814 10:14:26 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:06.814 10:14:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.814 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.814 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:07.073 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.073 10:14:26 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:07.073 10:14:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.073 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.073 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:07.331 10:14:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.331 10:14:26 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:07.331 10:14:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.331 10:14:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.331 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:14:07.589 10:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.589 10:14:27 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:07.589 10:14:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.589 10:14:27 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:07.589 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.154 10:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.154 10:14:27 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:08.154 10:14:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.154 10:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.154 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.412 10:14:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.412 10:14:27 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:08.412 10:14:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.412 10:14:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.412 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:14:08.670 10:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.670 10:14:28 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:08.670 10:14:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.670 10:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.670 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:14:08.928 10:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.928 10:14:28 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:08.928 10:14:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.928 10:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.928 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:14:09.186 10:14:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.186 10:14:28 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:09.186 10:14:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.186 10:14:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.186 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:14:09.753 10:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.753 10:14:29 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:09.753 10:14:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.753 10:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.753 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.012 10:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.012 10:14:29 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:10.012 10:14:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.012 10:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.012 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.270 10:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.270 10:14:29 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:10.270 10:14:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.270 10:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.270 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.528 10:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.528 10:14:29 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:10.529 10:14:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.529 10:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.529 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:14:10.788 10:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.788 10:14:30 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:10.788 10:14:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.788 10:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.788 
10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.353 10:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.353 10:14:30 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:11.353 10:14:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.353 10:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.353 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.611 10:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.611 10:14:30 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:11.611 10:14:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.611 10:14:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.611 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.870 10:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.870 10:14:31 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:11.870 10:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.870 10:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.870 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.127 10:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.127 10:14:31 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:12.127 10:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.127 10:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.128 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.385 10:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.385 10:14:31 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:12.385 10:14:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.385 10:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.385 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.952 10:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.952 10:14:32 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:12.952 10:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.952 10:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.952 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:14:13.210 10:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.210 10:14:32 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:13.210 10:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.210 10:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.210 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:14:13.468 10:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.468 10:14:32 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:13.468 10:14:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.468 10:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.468 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:14:13.726 10:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.726 10:14:33 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:13.726 10:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.726 10:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.726 10:14:33 -- common/autotest_common.sh@10 -- # set +x 00:14:13.984 10:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.984 10:14:33 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:13.984 10:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.984 10:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.984 10:14:33 -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.550 10:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.550 10:14:33 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:14.550 10:14:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.550 10:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.550 10:14:33 -- common/autotest_common.sh@10 -- # set +x 00:14:14.808 10:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.808 10:14:34 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:14.808 10:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.808 10:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.808 10:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:15.066 10:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.066 10:14:34 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:15.066 10:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.066 10:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.066 10:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:15.324 10:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.324 10:14:34 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:15.324 10:14:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.324 10:14:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.324 10:14:34 -- common/autotest_common.sh@10 -- # set +x 00:14:15.889 10:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.889 10:14:35 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:15.889 10:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.889 10:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.889 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.148 10:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.148 10:14:35 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:16.148 10:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.148 10:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.148 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.465 10:14:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.465 10:14:35 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:16.465 10:14:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.465 10:14:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.465 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:16.757 10:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.757 10:14:36 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:16.757 10:14:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.757 10:14:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.757 10:14:36 -- common/autotest_common.sh@10 -- # set +x 00:14:16.757 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.017 10:14:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.017 10:14:36 -- target/connect_stress.sh@34 -- # kill -0 81472 00:14:17.017 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81472) - No such process 00:14:17.017 10:14:36 -- target/connect_stress.sh@38 -- # wait 81472 00:14:17.017 10:14:36 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:14:17.017 10:14:36 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:17.017 10:14:36 -- target/connect_stress.sh@43 -- # 
nvmftestfini 00:14:17.017 10:14:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:17.017 10:14:36 -- nvmf/common.sh@116 -- # sync 00:14:17.017 10:14:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:17.017 10:14:36 -- nvmf/common.sh@119 -- # set +e 00:14:17.017 10:14:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:17.017 10:14:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:17.017 rmmod nvme_tcp 00:14:17.017 rmmod nvme_fabrics 00:14:17.017 rmmod nvme_keyring 00:14:17.017 10:14:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:17.017 10:14:36 -- nvmf/common.sh@123 -- # set -e 00:14:17.017 10:14:36 -- nvmf/common.sh@124 -- # return 0 00:14:17.017 10:14:36 -- nvmf/common.sh@477 -- # '[' -n 81420 ']' 00:14:17.017 10:14:36 -- nvmf/common.sh@478 -- # killprocess 81420 00:14:17.017 10:14:36 -- common/autotest_common.sh@936 -- # '[' -z 81420 ']' 00:14:17.017 10:14:36 -- common/autotest_common.sh@940 -- # kill -0 81420 00:14:17.017 10:14:36 -- common/autotest_common.sh@941 -- # uname 00:14:17.017 10:14:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:17.017 10:14:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81420 00:14:17.017 10:14:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:17.017 killing process with pid 81420 00:14:17.017 10:14:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:17.017 10:14:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81420' 00:14:17.017 10:14:36 -- common/autotest_common.sh@955 -- # kill 81420 00:14:17.017 10:14:36 -- common/autotest_common.sh@960 -- # wait 81420 00:14:17.275 10:14:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:17.275 10:14:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:17.275 10:14:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:17.275 10:14:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.275 10:14:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:17.275 10:14:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.275 10:14:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.275 10:14:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.275 10:14:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:17.275 00:14:17.275 real 0m12.541s 00:14:17.275 user 0m41.590s 00:14:17.275 sys 0m3.307s 00:14:17.275 10:14:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:17.275 10:14:36 -- common/autotest_common.sh@10 -- # set +x 00:14:17.275 ************************************ 00:14:17.275 END TEST nvmf_connect_stress 00:14:17.275 ************************************ 00:14:17.275 10:14:36 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:17.275 10:14:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:17.275 10:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.275 10:14:36 -- common/autotest_common.sh@10 -- # set +x 00:14:17.275 ************************************ 00:14:17.275 START TEST nvmf_fused_ordering 00:14:17.275 ************************************ 00:14:17.275 10:14:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:17.534 * Looking for test storage... 
00:14:17.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.534 10:14:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:17.534 10:14:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:17.534 10:14:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:17.534 10:14:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:17.534 10:14:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:17.534 10:14:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:17.534 10:14:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:17.534 10:14:36 -- scripts/common.sh@335 -- # IFS=.-: 00:14:17.534 10:14:36 -- scripts/common.sh@335 -- # read -ra ver1 00:14:17.534 10:14:36 -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.534 10:14:36 -- scripts/common.sh@336 -- # read -ra ver2 00:14:17.534 10:14:36 -- scripts/common.sh@337 -- # local 'op=<' 00:14:17.534 10:14:36 -- scripts/common.sh@339 -- # ver1_l=2 00:14:17.534 10:14:36 -- scripts/common.sh@340 -- # ver2_l=1 00:14:17.534 10:14:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:17.534 10:14:36 -- scripts/common.sh@343 -- # case "$op" in 00:14:17.534 10:14:36 -- scripts/common.sh@344 -- # : 1 00:14:17.534 10:14:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:17.534 10:14:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.534 10:14:36 -- scripts/common.sh@364 -- # decimal 1 00:14:17.534 10:14:36 -- scripts/common.sh@352 -- # local d=1 00:14:17.534 10:14:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.534 10:14:36 -- scripts/common.sh@354 -- # echo 1 00:14:17.534 10:14:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:17.534 10:14:36 -- scripts/common.sh@365 -- # decimal 2 00:14:17.534 10:14:36 -- scripts/common.sh@352 -- # local d=2 00:14:17.534 10:14:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.534 10:14:36 -- scripts/common.sh@354 -- # echo 2 00:14:17.534 10:14:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:17.534 10:14:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:17.534 10:14:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:17.534 10:14:36 -- scripts/common.sh@367 -- # return 0 00:14:17.534 10:14:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.534 10:14:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:17.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.534 --rc genhtml_branch_coverage=1 00:14:17.534 --rc genhtml_function_coverage=1 00:14:17.534 --rc genhtml_legend=1 00:14:17.534 --rc geninfo_all_blocks=1 00:14:17.534 --rc geninfo_unexecuted_blocks=1 00:14:17.534 00:14:17.534 ' 00:14:17.534 10:14:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:17.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.534 --rc genhtml_branch_coverage=1 00:14:17.534 --rc genhtml_function_coverage=1 00:14:17.534 --rc genhtml_legend=1 00:14:17.534 --rc geninfo_all_blocks=1 00:14:17.534 --rc geninfo_unexecuted_blocks=1 00:14:17.534 00:14:17.534 ' 00:14:17.534 10:14:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:17.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.534 --rc genhtml_branch_coverage=1 00:14:17.534 --rc genhtml_function_coverage=1 00:14:17.534 --rc genhtml_legend=1 00:14:17.534 --rc geninfo_all_blocks=1 00:14:17.534 --rc geninfo_unexecuted_blocks=1 00:14:17.534 00:14:17.534 ' 00:14:17.534 
10:14:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:17.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.534 --rc genhtml_branch_coverage=1 00:14:17.534 --rc genhtml_function_coverage=1 00:14:17.534 --rc genhtml_legend=1 00:14:17.534 --rc geninfo_all_blocks=1 00:14:17.534 --rc geninfo_unexecuted_blocks=1 00:14:17.534 00:14:17.534 ' 00:14:17.534 10:14:36 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.534 10:14:36 -- nvmf/common.sh@7 -- # uname -s 00:14:17.534 10:14:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.534 10:14:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.534 10:14:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.534 10:14:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.534 10:14:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.534 10:14:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.534 10:14:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.534 10:14:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.534 10:14:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.534 10:14:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.534 10:14:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:14:17.534 10:14:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:17.534 10:14:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.534 10:14:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.534 10:14:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.534 10:14:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.534 10:14:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.534 10:14:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.534 10:14:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.534 10:14:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.534 10:14:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.534 10:14:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.534 10:14:36 -- paths/export.sh@5 -- # export PATH 00:14:17.534 10:14:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.534 10:14:36 -- nvmf/common.sh@46 -- # : 0 00:14:17.534 10:14:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:17.534 10:14:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:17.534 10:14:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:17.534 10:14:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.534 10:14:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.534 10:14:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:17.534 10:14:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:17.534 10:14:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:17.534 10:14:36 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:17.534 10:14:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:17.534 10:14:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.534 10:14:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:17.534 10:14:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:17.534 10:14:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:17.535 10:14:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.535 10:14:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.535 10:14:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.535 10:14:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:17.535 10:14:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:17.535 10:14:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:17.535 10:14:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:17.535 10:14:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:17.535 10:14:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:17.535 10:14:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.535 10:14:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.535 10:14:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.535 10:14:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:17.535 10:14:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.535 10:14:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.535 10:14:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.535 10:14:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:17.535 10:14:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.535 10:14:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.535 10:14:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.535 10:14:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.535 10:14:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:17.535 10:14:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:17.535 Cannot find device "nvmf_tgt_br" 00:14:17.535 10:14:36 -- nvmf/common.sh@154 -- # true 00:14:17.535 10:14:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.535 Cannot find device "nvmf_tgt_br2" 00:14:17.535 10:14:36 -- nvmf/common.sh@155 -- # true 00:14:17.535 10:14:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:17.535 10:14:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:17.535 Cannot find device "nvmf_tgt_br" 00:14:17.535 10:14:36 -- nvmf/common.sh@157 -- # true 00:14:17.535 10:14:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:17.535 Cannot find device "nvmf_tgt_br2" 00:14:17.535 10:14:37 -- nvmf/common.sh@158 -- # true 00:14:17.535 10:14:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:17.535 10:14:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:17.793 10:14:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.793 10:14:37 -- nvmf/common.sh@161 -- # true 00:14:17.793 10:14:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.793 10:14:37 -- nvmf/common.sh@162 -- # true 00:14:17.793 10:14:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.793 10:14:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.793 10:14:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.793 10:14:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.793 10:14:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.793 10:14:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.793 10:14:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.793 10:14:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:17.793 10:14:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:17.793 10:14:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:17.793 10:14:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:17.793 10:14:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:17.793 10:14:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:17.793 10:14:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.793 10:14:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:17.793 10:14:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.793 10:14:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:17.793 10:14:37 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:17.793 10:14:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.793 10:14:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.793 10:14:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.793 10:14:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.793 10:14:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.793 10:14:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:17.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:17.793 00:14:17.793 --- 10.0.0.2 ping statistics --- 00:14:17.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.793 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:17.793 10:14:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:17.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:17.794 00:14:17.794 --- 10.0.0.3 ping statistics --- 00:14:17.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.794 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:17.794 10:14:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:17.794 00:14:17.794 --- 10.0.0.1 ping statistics --- 00:14:17.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.794 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:17.794 10:14:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.794 10:14:37 -- nvmf/common.sh@421 -- # return 0 00:14:17.794 10:14:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:17.794 10:14:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.794 10:14:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:17.794 10:14:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:17.794 10:14:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.794 10:14:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:17.794 10:14:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:17.794 10:14:37 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:17.794 10:14:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:17.794 10:14:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:17.794 10:14:37 -- common/autotest_common.sh@10 -- # set +x 00:14:17.794 10:14:37 -- nvmf/common.sh@469 -- # nvmfpid=81808 00:14:17.794 10:14:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:17.794 10:14:37 -- nvmf/common.sh@470 -- # waitforlisten 81808 00:14:17.794 10:14:37 -- common/autotest_common.sh@829 -- # '[' -z 81808 ']' 00:14:17.794 10:14:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.794 10:14:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.794 10:14:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
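Each nvmftestinit run rebuilds the same veth/namespace topology traced above: the target-side interfaces live in the nvmf_tgt_ns_spdk namespace and are bridged back to the initiator interface on the host, with TCP port 4420 opened through iptables. A condensed sketch of the essential commands (the second target pair, nvmf_tgt_if2 with 10.0.0.3, is created the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                        # reachability check before the test starts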
00:14:17.794 10:14:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.794 10:14:37 -- common/autotest_common.sh@10 -- # set +x 00:14:18.051 [2024-11-19 10:14:37.382668] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:18.051 [2024-11-19 10:14:37.382797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.051 [2024-11-19 10:14:37.527680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.051 [2024-11-19 10:14:37.563289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:18.051 [2024-11-19 10:14:37.563439] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.051 [2024-11-19 10:14:37.563452] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.051 [2024-11-19 10:14:37.563460] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.051 [2024-11-19 10:14:37.563487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.985 10:14:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.985 10:14:38 -- common/autotest_common.sh@862 -- # return 0 00:14:18.985 10:14:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:18.985 10:14:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 10:14:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.985 10:14:38 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 [2024-11-19 10:14:38.461075] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 [2024-11-19 10:14:38.477198] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 NULL1 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:18.985 10:14:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.985 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:14:18.985 10:14:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.985 10:14:38 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:18.985 [2024-11-19 10:14:38.528886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:18.985 [2024-11-19 10:14:38.528952] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81860 ] 00:14:19.552 Attached to nqn.2016-06.io.spdk:cnode1 00:14:19.552 Namespace ID: 1 size: 1GB 00:14:19.552 fused_ordering(0) 00:14:19.552 fused_ordering(1) 00:14:19.552 fused_ordering(2) 00:14:19.552 fused_ordering(3) 00:14:19.552 fused_ordering(4) 00:14:19.552 fused_ordering(5) 00:14:19.552 fused_ordering(6) 00:14:19.552 fused_ordering(7) 00:14:19.552 fused_ordering(8) 00:14:19.552 fused_ordering(9) 00:14:19.552 fused_ordering(10) 00:14:19.552 fused_ordering(11) 00:14:19.552 fused_ordering(12) 00:14:19.552 fused_ordering(13) 00:14:19.552 fused_ordering(14) 00:14:19.552 fused_ordering(15) 00:14:19.552 fused_ordering(16) 00:14:19.552 fused_ordering(17) 00:14:19.552 fused_ordering(18) 00:14:19.552 fused_ordering(19) 00:14:19.552 fused_ordering(20) 00:14:19.552 fused_ordering(21) 00:14:19.552 fused_ordering(22) 00:14:19.552 fused_ordering(23) 00:14:19.552 fused_ordering(24) 00:14:19.552 fused_ordering(25) 00:14:19.552 fused_ordering(26) 00:14:19.552 fused_ordering(27) 00:14:19.552 fused_ordering(28) 00:14:19.552 fused_ordering(29) 00:14:19.552 fused_ordering(30) 00:14:19.552 fused_ordering(31) 00:14:19.552 fused_ordering(32) 00:14:19.552 fused_ordering(33) 00:14:19.552 fused_ordering(34) 00:14:19.552 fused_ordering(35) 00:14:19.552 fused_ordering(36) 00:14:19.552 fused_ordering(37) 00:14:19.552 fused_ordering(38) 00:14:19.552 fused_ordering(39) 00:14:19.552 fused_ordering(40) 00:14:19.552 fused_ordering(41) 00:14:19.552 fused_ordering(42) 00:14:19.552 fused_ordering(43) 00:14:19.552 fused_ordering(44) 00:14:19.552 fused_ordering(45) 00:14:19.552 fused_ordering(46) 00:14:19.552 fused_ordering(47) 00:14:19.552 fused_ordering(48) 00:14:19.552 fused_ordering(49) 00:14:19.552 fused_ordering(50) 00:14:19.552 fused_ordering(51) 00:14:19.552 fused_ordering(52) 00:14:19.552 fused_ordering(53) 00:14:19.552 fused_ordering(54) 00:14:19.552 fused_ordering(55) 00:14:19.552 fused_ordering(56) 00:14:19.552 fused_ordering(57) 00:14:19.552 fused_ordering(58) 00:14:19.552 fused_ordering(59) 00:14:19.552 fused_ordering(60) 00:14:19.552 fused_ordering(61) 00:14:19.552 fused_ordering(62) 00:14:19.552 fused_ordering(63) 00:14:19.552 fused_ordering(64) 00:14:19.552 fused_ordering(65) 00:14:19.552 fused_ordering(66) 00:14:19.552 fused_ordering(67) 00:14:19.552 fused_ordering(68) 00:14:19.552 fused_ordering(69) 00:14:19.552 fused_ordering(70) 00:14:19.552 fused_ordering(71) 00:14:19.552 fused_ordering(72) 00:14:19.552 
fused_ordering(73) 00:14:19.552 [entries 74 through 932 continue in unbroken ascending sequence; timestamps advance through 00:14:19.814, 00:14:20.384, 00:14:20.950 and 00:14:21.518] fused_ordering(933)
00:14:21.519 fused_ordering(934) 00:14:21.519 fused_ordering(935) 00:14:21.519 fused_ordering(936) 00:14:21.519 fused_ordering(937) 00:14:21.519 fused_ordering(938) 00:14:21.519 fused_ordering(939) 00:14:21.519 fused_ordering(940) 00:14:21.519 fused_ordering(941) 00:14:21.519 fused_ordering(942) 00:14:21.519 fused_ordering(943) 00:14:21.519 fused_ordering(944) 00:14:21.519 fused_ordering(945) 00:14:21.519 fused_ordering(946) 00:14:21.519 fused_ordering(947) 00:14:21.519 fused_ordering(948) 00:14:21.519 fused_ordering(949) 00:14:21.519 fused_ordering(950) 00:14:21.519 fused_ordering(951) 00:14:21.519 fused_ordering(952) 00:14:21.519 fused_ordering(953) 00:14:21.519 fused_ordering(954) 00:14:21.519 fused_ordering(955) 00:14:21.519 fused_ordering(956) 00:14:21.519 fused_ordering(957) 00:14:21.519 fused_ordering(958) 00:14:21.519 fused_ordering(959) 00:14:21.519 fused_ordering(960) 00:14:21.519 fused_ordering(961) 00:14:21.519 fused_ordering(962) 00:14:21.519 fused_ordering(963) 00:14:21.519 fused_ordering(964) 00:14:21.519 fused_ordering(965) 00:14:21.519 fused_ordering(966) 00:14:21.519 fused_ordering(967) 00:14:21.519 fused_ordering(968) 00:14:21.519 fused_ordering(969) 00:14:21.519 fused_ordering(970) 00:14:21.519 fused_ordering(971) 00:14:21.519 fused_ordering(972) 00:14:21.519 fused_ordering(973) 00:14:21.519 fused_ordering(974) 00:14:21.519 fused_ordering(975) 00:14:21.519 fused_ordering(976) 00:14:21.519 fused_ordering(977) 00:14:21.519 fused_ordering(978) 00:14:21.519 fused_ordering(979) 00:14:21.519 fused_ordering(980) 00:14:21.519 fused_ordering(981) 00:14:21.519 fused_ordering(982) 00:14:21.519 fused_ordering(983) 00:14:21.519 fused_ordering(984) 00:14:21.519 fused_ordering(985) 00:14:21.519 fused_ordering(986) 00:14:21.519 fused_ordering(987) 00:14:21.519 fused_ordering(988) 00:14:21.519 fused_ordering(989) 00:14:21.519 fused_ordering(990) 00:14:21.519 fused_ordering(991) 00:14:21.519 fused_ordering(992) 00:14:21.519 fused_ordering(993) 00:14:21.519 fused_ordering(994) 00:14:21.519 fused_ordering(995) 00:14:21.519 fused_ordering(996) 00:14:21.519 fused_ordering(997) 00:14:21.519 fused_ordering(998) 00:14:21.519 fused_ordering(999) 00:14:21.519 fused_ordering(1000) 00:14:21.519 fused_ordering(1001) 00:14:21.519 fused_ordering(1002) 00:14:21.519 fused_ordering(1003) 00:14:21.519 fused_ordering(1004) 00:14:21.519 fused_ordering(1005) 00:14:21.519 fused_ordering(1006) 00:14:21.519 fused_ordering(1007) 00:14:21.519 fused_ordering(1008) 00:14:21.519 fused_ordering(1009) 00:14:21.519 fused_ordering(1010) 00:14:21.519 fused_ordering(1011) 00:14:21.519 fused_ordering(1012) 00:14:21.519 fused_ordering(1013) 00:14:21.519 fused_ordering(1014) 00:14:21.519 fused_ordering(1015) 00:14:21.519 fused_ordering(1016) 00:14:21.519 fused_ordering(1017) 00:14:21.519 fused_ordering(1018) 00:14:21.519 fused_ordering(1019) 00:14:21.519 fused_ordering(1020) 00:14:21.519 fused_ordering(1021) 00:14:21.519 fused_ordering(1022) 00:14:21.519 fused_ordering(1023) 00:14:21.519 10:14:40 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:21.519 10:14:40 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:21.519 10:14:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:21.519 10:14:40 -- nvmf/common.sh@116 -- # sync 00:14:21.519 10:14:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:21.519 10:14:40 -- nvmf/common.sh@119 -- # set +e 00:14:21.519 10:14:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:21.519 10:14:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:21.519 rmmod 
nvme_tcp 00:14:21.519 rmmod nvme_fabrics 00:14:21.519 rmmod nvme_keyring 00:14:21.519 10:14:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:21.519 10:14:40 -- nvmf/common.sh@123 -- # set -e 00:14:21.519 10:14:40 -- nvmf/common.sh@124 -- # return 0 00:14:21.519 10:14:40 -- nvmf/common.sh@477 -- # '[' -n 81808 ']' 00:14:21.519 10:14:40 -- nvmf/common.sh@478 -- # killprocess 81808 00:14:21.519 10:14:40 -- common/autotest_common.sh@936 -- # '[' -z 81808 ']' 00:14:21.519 10:14:40 -- common/autotest_common.sh@940 -- # kill -0 81808 00:14:21.519 10:14:40 -- common/autotest_common.sh@941 -- # uname 00:14:21.519 10:14:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:21.519 10:14:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81808 00:14:21.519 10:14:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:21.519 10:14:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:21.519 10:14:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81808' 00:14:21.519 killing process with pid 81808 00:14:21.519 10:14:40 -- common/autotest_common.sh@955 -- # kill 81808 00:14:21.519 10:14:40 -- common/autotest_common.sh@960 -- # wait 81808 00:14:21.778 10:14:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:21.778 10:14:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:21.778 10:14:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.778 10:14:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.778 10:14:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.778 10:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.778 10:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.778 10:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.778 10:14:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:21.778 ************************************ 00:14:21.778 END TEST nvmf_fused_ordering 00:14:21.778 ************************************ 00:14:21.778 00:14:21.778 real 0m4.401s 00:14:21.778 user 0m5.325s 00:14:21.778 sys 0m1.464s 00:14:21.778 10:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:21.778 10:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.778 10:14:41 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.778 10:14:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.778 10:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.778 10:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.778 ************************************ 00:14:21.778 START TEST nvmf_delete_subsystem 00:14:21.778 ************************************ 00:14:21.778 10:14:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.778 * Looking for test storage... 
00:14:21.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.778 10:14:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:21.778 10:14:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:21.778 10:14:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:22.037 10:14:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:22.037 10:14:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:22.037 10:14:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:22.037 10:14:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:22.037 10:14:41 -- scripts/common.sh@335 -- # IFS=.-: 00:14:22.037 10:14:41 -- scripts/common.sh@335 -- # read -ra ver1 00:14:22.037 10:14:41 -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.037 10:14:41 -- scripts/common.sh@336 -- # read -ra ver2 00:14:22.037 10:14:41 -- scripts/common.sh@337 -- # local 'op=<' 00:14:22.037 10:14:41 -- scripts/common.sh@339 -- # ver1_l=2 00:14:22.037 10:14:41 -- scripts/common.sh@340 -- # ver2_l=1 00:14:22.037 10:14:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:22.037 10:14:41 -- scripts/common.sh@343 -- # case "$op" in 00:14:22.037 10:14:41 -- scripts/common.sh@344 -- # : 1 00:14:22.037 10:14:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:22.037 10:14:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.037 10:14:41 -- scripts/common.sh@364 -- # decimal 1 00:14:22.037 10:14:41 -- scripts/common.sh@352 -- # local d=1 00:14:22.037 10:14:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.037 10:14:41 -- scripts/common.sh@354 -- # echo 1 00:14:22.037 10:14:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:22.037 10:14:41 -- scripts/common.sh@365 -- # decimal 2 00:14:22.037 10:14:41 -- scripts/common.sh@352 -- # local d=2 00:14:22.037 10:14:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.037 10:14:41 -- scripts/common.sh@354 -- # echo 2 00:14:22.037 10:14:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:22.037 10:14:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:22.037 10:14:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:22.037 10:14:41 -- scripts/common.sh@367 -- # return 0 00:14:22.037 10:14:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.037 10:14:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:22.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.037 --rc genhtml_branch_coverage=1 00:14:22.037 --rc genhtml_function_coverage=1 00:14:22.037 --rc genhtml_legend=1 00:14:22.037 --rc geninfo_all_blocks=1 00:14:22.037 --rc geninfo_unexecuted_blocks=1 00:14:22.037 00:14:22.037 ' 00:14:22.037 10:14:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:22.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.037 --rc genhtml_branch_coverage=1 00:14:22.037 --rc genhtml_function_coverage=1 00:14:22.037 --rc genhtml_legend=1 00:14:22.037 --rc geninfo_all_blocks=1 00:14:22.037 --rc geninfo_unexecuted_blocks=1 00:14:22.037 00:14:22.037 ' 00:14:22.037 10:14:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:22.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.037 --rc genhtml_branch_coverage=1 00:14:22.037 --rc genhtml_function_coverage=1 00:14:22.037 --rc genhtml_legend=1 00:14:22.037 --rc geninfo_all_blocks=1 00:14:22.037 --rc geninfo_unexecuted_blocks=1 00:14:22.037 00:14:22.037 ' 00:14:22.037 
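The cmp_versions/decimal trace above is the harness checking whether the installed lcov predates 1.15 before it settles on LCOV_OPTS. A simplified, hedged sketch of that dotted-version comparison follows; the real helpers live in scripts/common.sh, and the function name and example message below are illustrative only, with fields assumed numeric.

# Return success if $1 is strictly older than $2, comparing dot/dash/colon-separated numeric fields
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}            # missing fields count as 0
        (( 10#$x < 10#$y )) && return 0
        (( 10#$x > 10#$y )) && return 1
    done
    return 1                                       # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x: keep the branch/function coverage flags explicit"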
10:14:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:22.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.037 --rc genhtml_branch_coverage=1 00:14:22.037 --rc genhtml_function_coverage=1 00:14:22.037 --rc genhtml_legend=1 00:14:22.037 --rc geninfo_all_blocks=1 00:14:22.037 --rc geninfo_unexecuted_blocks=1 00:14:22.037 00:14:22.037 ' 00:14:22.037 10:14:41 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:22.037 10:14:41 -- nvmf/common.sh@7 -- # uname -s 00:14:22.037 10:14:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.037 10:14:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.037 10:14:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.037 10:14:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.037 10:14:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.037 10:14:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.037 10:14:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.037 10:14:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.037 10:14:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.037 10:14:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.037 10:14:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:14:22.037 10:14:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:22.037 10:14:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.037 10:14:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.037 10:14:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:22.037 10:14:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:22.037 10:14:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.037 10:14:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.037 10:14:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.037 10:14:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.038 10:14:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.038 10:14:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.038 10:14:41 -- paths/export.sh@5 -- # export PATH 00:14:22.038 10:14:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.038 10:14:41 -- nvmf/common.sh@46 -- # : 0 00:14:22.038 10:14:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:22.038 10:14:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:22.038 10:14:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:22.038 10:14:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.038 10:14:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.038 10:14:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:22.038 10:14:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:22.038 10:14:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:22.038 10:14:41 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:22.038 10:14:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:22.038 10:14:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.038 10:14:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:22.038 10:14:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:22.038 10:14:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:22.038 10:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.038 10:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.038 10:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.038 10:14:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:22.038 10:14:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:22.038 10:14:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:22.038 10:14:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:22.038 10:14:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:22.038 10:14:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:22.038 10:14:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:22.038 10:14:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:22.038 10:14:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:22.038 10:14:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:22.038 10:14:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:22.038 10:14:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:22.038 10:14:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:22.038 10:14:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:14:22.038 10:14:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:22.038 10:14:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:22.038 10:14:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:22.038 10:14:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:22.038 10:14:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:22.038 10:14:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:22.038 Cannot find device "nvmf_tgt_br" 00:14:22.038 10:14:41 -- nvmf/common.sh@154 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:22.038 Cannot find device "nvmf_tgt_br2" 00:14:22.038 10:14:41 -- nvmf/common.sh@155 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:22.038 10:14:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:22.038 Cannot find device "nvmf_tgt_br" 00:14:22.038 10:14:41 -- nvmf/common.sh@157 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:22.038 Cannot find device "nvmf_tgt_br2" 00:14:22.038 10:14:41 -- nvmf/common.sh@158 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:22.038 10:14:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:22.038 10:14:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:22.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.038 10:14:41 -- nvmf/common.sh@161 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:22.038 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:22.038 10:14:41 -- nvmf/common.sh@162 -- # true 00:14:22.038 10:14:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:22.038 10:14:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:22.038 10:14:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:22.296 10:14:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:22.296 10:14:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:22.296 10:14:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:22.297 10:14:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:22.297 10:14:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:22.297 10:14:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:22.297 10:14:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:22.297 10:14:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:22.297 10:14:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:22.297 10:14:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:22.297 10:14:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:22.297 10:14:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:22.297 10:14:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:22.297 10:14:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:22.297 10:14:41 -- 
nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:22.297 10:14:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:22.297 10:14:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:22.297 10:14:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:22.297 10:14:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:22.297 10:14:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:22.297 10:14:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:22.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:22.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:14:22.297 00:14:22.297 --- 10.0.0.2 ping statistics --- 00:14:22.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.297 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:22.297 10:14:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:22.297 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:22.297 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:14:22.297 00:14:22.297 --- 10.0.0.3 ping statistics --- 00:14:22.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.297 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:22.297 10:14:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:22.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:22.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:22.297 00:14:22.297 --- 10.0.0.1 ping statistics --- 00:14:22.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:22.297 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:22.297 10:14:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:22.297 10:14:41 -- nvmf/common.sh@421 -- # return 0 00:14:22.297 10:14:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:22.297 10:14:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:22.297 10:14:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:22.297 10:14:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:22.297 10:14:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:22.297 10:14:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:22.297 10:14:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:22.297 10:14:41 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:22.297 10:14:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:22.297 10:14:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:22.297 10:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:22.297 10:14:41 -- nvmf/common.sh@469 -- # nvmfpid=82090 00:14:22.297 10:14:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:22.297 10:14:41 -- nvmf/common.sh@470 -- # waitforlisten 82090 00:14:22.297 10:14:41 -- common/autotest_common.sh@829 -- # '[' -z 82090 ']' 00:14:22.297 10:14:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.297 10:14:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.297 10:14:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
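The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is the harness's waitforlisten step: it polls until the freshly started nvmf_tgt is still alive and its RPC socket answers. A hedged sketch of that loop is below; the rpc.py path is assumed from the repo layout shown elsewhere in the log, and the retry count and helper name are illustrative rather than the harness's exact logic.

# Poll until the target PID is alive and the RPC socket responds, or give up
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1                        # target exited during startup
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0     # socket is up and answering RPCs
        sleep 0.5
    done
    return 1
}
# e.g. wait_for_rpc "$nvmfpid" && echo "target ready for the nvmf_create_transport / nvmf_create_subsystem calls that follow"

Only once this wait succeeds do the nvmf_create_transport, nvmf_create_subsystem, nvmf_subsystem_add_listener and bdev RPCs traced next in the log have a live socket to talk to.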
00:14:22.297 10:14:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.297 10:14:41 -- common/autotest_common.sh@10 -- # set +x 00:14:22.554 [2024-11-19 10:14:41.855096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:22.554 [2024-11-19 10:14:41.855201] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.554 [2024-11-19 10:14:41.995424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:22.554 [2024-11-19 10:14:42.038225] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:22.554 [2024-11-19 10:14:42.038463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.555 [2024-11-19 10:14:42.038482] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.555 [2024-11-19 10:14:42.038496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.555 [2024-11-19 10:14:42.038626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.555 [2024-11-19 10:14:42.038657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.486 10:14:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.486 10:14:43 -- common/autotest_common.sh@862 -- # return 0 00:14:23.486 10:14:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:23.486 10:14:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.486 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 10:14:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 [2024-11-19 10:14:43.060090] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 [2024-11-19 10:14:43.076240] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 NULL1 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- 
target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 Delay0 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.744 10:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.744 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:23.744 10:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@28 -- # perf_pid=82141 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:23.744 10:14:43 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:23.744 [2024-11-19 10:14:43.270861] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:25.644 10:14:45 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.644 10:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.644 10:14:45 -- common/autotest_common.sh@10 -- # set +x 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 
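Just before the queued I/O above starts completing with errors, delete_subsystem.sh has configured the target and launched the load generator. Expressed directly against scripts/rpc.py (rpc_cmd in the trace is the suite's wrapper around it) and using the command lines visible in the trace, the sequence is roughly the sketch below; the 1000000 values passed to bdev_delay_create are microseconds, so each I/O carries about a second of added latency, which is what keeps requests in flight long enough for the subsystem deletion to abort them.

# TCP transport (the -o / -u 8192 options are passed exactly as the suite passes them),
# a subsystem capped at 10 namespaces, a listener, and a delay bdev layered over a null bdev.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512       # 1000 MB null bdev, 512-byte blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive I/O from the initiator side, then delete the subsystem underneath it,
# which produces the aborted completions (sc=8) seen in the trace.
build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1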
Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 [2024-11-19 10:14:45.305969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d360 is same with the state(5) to be set 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.904 starting I/O failed: -6 00:14:25.904 Write completed with error (sct=0, sc=8) 00:14:25.904 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 starting I/O failed: -6 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 [2024-11-19 10:14:45.310089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f010800c350 is same with the state(5) to be set 00:14:25.905 Read 
completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 [2024-11-19 10:14:45.310620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d8c0 is same with the state(5) to be set 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, 
sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Read completed with error (sct=0, sc=8) 00:14:25.905 Write completed with error (sct=0, sc=8) 00:14:26.840 [2024-11-19 10:14:46.285612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1730040 is same with the state(5) to be set 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 [2024-11-19 10:14:46.305795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f010800c600 is same with the state(5) to be set 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 
00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 [2024-11-19 10:14:46.306020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f010800bf20 is same with the state(5) to be set 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 [2024-11-19 10:14:46.307310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17302f0 is same with the state(5) to be set 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Write completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 
00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 Read completed with error (sct=0, sc=8) 00:14:26.840 [2024-11-19 10:14:46.309102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176d610 is same with the state(5) to be set 00:14:26.840 [2024-11-19 10:14:46.309562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1730040 (9): Bad file descriptor 00:14:26.840 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:26.840 10:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.840 10:14:46 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:26.840 10:14:46 -- target/delete_subsystem.sh@35 -- # kill -0 82141 00:14:26.840 10:14:46 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:26.840 Initializing NVMe Controllers 00:14:26.840 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.840 Controller IO queue size 128, less than required. 00:14:26.840 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:26.840 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:26.840 Initialization complete. Launching workers. 00:14:26.840 ======================================================== 00:14:26.840 Latency(us) 00:14:26.840 Device Information : IOPS MiB/s Average min max 00:14:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.84 0.08 888892.14 1245.27 1014284.81 00:14:26.840 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.02 0.07 997575.34 3354.93 2001738.22 00:14:26.840 ======================================================== 00:14:26.840 Total : 320.85 0.16 938691.25 1245.27 2001738.22 00:14:26.840 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@35 -- # kill -0 82141 00:14:27.407 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82141) - No such process 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@45 -- # NOT wait 82141 00:14:27.407 10:14:46 -- common/autotest_common.sh@650 -- # local es=0 00:14:27.407 10:14:46 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 82141 00:14:27.407 10:14:46 -- common/autotest_common.sh@638 -- # local arg=wait 00:14:27.407 10:14:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.407 10:14:46 -- common/autotest_common.sh@642 -- # type -t wait 00:14:27.407 10:14:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:27.407 10:14:46 -- common/autotest_common.sh@653 -- # wait 82141 00:14:27.407 10:14:46 -- common/autotest_common.sh@653 -- # es=1 00:14:27.407 10:14:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:27.407 10:14:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:27.407 10:14:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:27.407 10:14:46 -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.407 10:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.407 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:14:27.407 10:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.407 10:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.407 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:14:27.407 [2024-11-19 10:14:46.832361] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.407 10:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.407 10:14:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.407 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:14:27.407 10:14:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@54 -- # perf_pid=82187 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:27.407 10:14:46 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:27.664 [2024-11-19 10:14:47.004893] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:27.922 10:14:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:27.922 10:14:47 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:27.922 10:14:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.489 10:14:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.489 10:14:47 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:28.489 10:14:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.056 10:14:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.056 10:14:48 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:29.056 10:14:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.622 10:14:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.622 10:14:48 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:29.622 10:14:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.879 10:14:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.880 10:14:49 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:29.880 10:14:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.445 10:14:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.445 10:14:49 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:30.445 10:14:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.722 Initializing NVMe Controllers 00:14:30.722 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.722 Controller IO queue size 128, less than required. 
00:14:30.722 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:30.722 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:30.722 Initialization complete. Launching workers. 00:14:30.722 ======================================================== 00:14:30.722 Latency(us) 00:14:30.722 Device Information : IOPS MiB/s Average min max 00:14:30.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003520.16 1000136.44 1014508.88 00:14:30.722 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005547.56 1000237.27 1015138.81 00:14:30.722 ======================================================== 00:14:30.722 Total : 256.00 0.12 1004533.86 1000136.44 1015138.81 00:14:30.722 00:14:30.981 10:14:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.981 10:14:50 -- target/delete_subsystem.sh@57 -- # kill -0 82187 00:14:30.981 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82187) - No such process 00:14:30.981 10:14:50 -- target/delete_subsystem.sh@67 -- # wait 82187 00:14:30.981 10:14:50 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:30.981 10:14:50 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:30.981 10:14:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.981 10:14:50 -- nvmf/common.sh@116 -- # sync 00:14:30.981 10:14:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.981 10:14:50 -- nvmf/common.sh@119 -- # set +e 00:14:30.981 10:14:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.981 10:14:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.981 rmmod nvme_tcp 00:14:30.981 rmmod nvme_fabrics 00:14:30.981 rmmod nvme_keyring 00:14:30.981 10:14:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.981 10:14:50 -- nvmf/common.sh@123 -- # set -e 00:14:30.981 10:14:50 -- nvmf/common.sh@124 -- # return 0 00:14:30.981 10:14:50 -- nvmf/common.sh@477 -- # '[' -n 82090 ']' 00:14:30.981 10:14:50 -- nvmf/common.sh@478 -- # killprocess 82090 00:14:30.981 10:14:50 -- common/autotest_common.sh@936 -- # '[' -z 82090 ']' 00:14:30.981 10:14:50 -- common/autotest_common.sh@940 -- # kill -0 82090 00:14:30.981 10:14:50 -- common/autotest_common.sh@941 -- # uname 00:14:30.981 10:14:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.981 10:14:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82090 00:14:30.981 killing process with pid 82090 00:14:30.981 10:14:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.981 10:14:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.981 10:14:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82090' 00:14:30.981 10:14:50 -- common/autotest_common.sh@955 -- # kill 82090 00:14:30.981 10:14:50 -- common/autotest_common.sh@960 -- # wait 82090 00:14:31.241 10:14:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:31.241 10:14:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:31.241 10:14:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:31.241 10:14:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.241 10:14:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:31.241 10:14:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.241 
10:14:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.241 10:14:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.241 10:14:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:31.241 00:14:31.241 real 0m9.472s 00:14:31.241 user 0m29.048s 00:14:31.241 sys 0m1.568s 00:14:31.241 10:14:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:31.241 10:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:31.241 ************************************ 00:14:31.241 END TEST nvmf_delete_subsystem 00:14:31.241 ************************************ 00:14:31.241 10:14:50 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:14:31.241 10:14:50 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:31.241 10:14:50 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:31.241 10:14:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:31.241 10:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.241 10:14:50 -- common/autotest_common.sh@10 -- # set +x 00:14:31.241 ************************************ 00:14:31.241 START TEST nvmf_host_management 00:14:31.241 ************************************ 00:14:31.241 10:14:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:31.499 * Looking for test storage... 00:14:31.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:31.499 10:14:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:31.499 10:14:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:31.499 10:14:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:31.499 10:14:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:31.499 10:14:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:31.499 10:14:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:31.499 10:14:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:31.499 10:14:50 -- scripts/common.sh@335 -- # IFS=.-: 00:14:31.499 10:14:50 -- scripts/common.sh@335 -- # read -ra ver1 00:14:31.499 10:14:50 -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.499 10:14:50 -- scripts/common.sh@336 -- # read -ra ver2 00:14:31.499 10:14:50 -- scripts/common.sh@337 -- # local 'op=<' 00:14:31.499 10:14:50 -- scripts/common.sh@339 -- # ver1_l=2 00:14:31.499 10:14:50 -- scripts/common.sh@340 -- # ver2_l=1 00:14:31.499 10:14:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:31.499 10:14:50 -- scripts/common.sh@343 -- # case "$op" in 00:14:31.499 10:14:50 -- scripts/common.sh@344 -- # : 1 00:14:31.499 10:14:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:31.499 10:14:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.499 10:14:50 -- scripts/common.sh@364 -- # decimal 1 00:14:31.499 10:14:50 -- scripts/common.sh@352 -- # local d=1 00:14:31.499 10:14:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.499 10:14:50 -- scripts/common.sh@354 -- # echo 1 00:14:31.499 10:14:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:31.499 10:14:50 -- scripts/common.sh@365 -- # decimal 2 00:14:31.499 10:14:50 -- scripts/common.sh@352 -- # local d=2 00:14:31.499 10:14:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.499 10:14:50 -- scripts/common.sh@354 -- # echo 2 00:14:31.499 10:14:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:31.499 10:14:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:31.499 10:14:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:31.499 10:14:50 -- scripts/common.sh@367 -- # return 0 00:14:31.499 10:14:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.499 10:14:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:31.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.499 --rc genhtml_branch_coverage=1 00:14:31.499 --rc genhtml_function_coverage=1 00:14:31.499 --rc genhtml_legend=1 00:14:31.499 --rc geninfo_all_blocks=1 00:14:31.499 --rc geninfo_unexecuted_blocks=1 00:14:31.499 00:14:31.499 ' 00:14:31.499 10:14:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:31.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.499 --rc genhtml_branch_coverage=1 00:14:31.499 --rc genhtml_function_coverage=1 00:14:31.499 --rc genhtml_legend=1 00:14:31.499 --rc geninfo_all_blocks=1 00:14:31.499 --rc geninfo_unexecuted_blocks=1 00:14:31.499 00:14:31.499 ' 00:14:31.499 10:14:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:31.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.499 --rc genhtml_branch_coverage=1 00:14:31.499 --rc genhtml_function_coverage=1 00:14:31.499 --rc genhtml_legend=1 00:14:31.499 --rc geninfo_all_blocks=1 00:14:31.499 --rc geninfo_unexecuted_blocks=1 00:14:31.499 00:14:31.499 ' 00:14:31.499 10:14:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:31.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.499 --rc genhtml_branch_coverage=1 00:14:31.499 --rc genhtml_function_coverage=1 00:14:31.499 --rc genhtml_legend=1 00:14:31.499 --rc geninfo_all_blocks=1 00:14:31.499 --rc geninfo_unexecuted_blocks=1 00:14:31.499 00:14:31.499 ' 00:14:31.499 10:14:50 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.499 10:14:50 -- nvmf/common.sh@7 -- # uname -s 00:14:31.499 10:14:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.499 10:14:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.499 10:14:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.499 10:14:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.499 10:14:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.499 10:14:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.499 10:14:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.499 10:14:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.499 10:14:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.499 10:14:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.499 10:14:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
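At this point nvmf/common.sh generates a throwaway host identity: NVME_HOSTNQN from nvme gen-hostnqn and, just below, NVME_HOSTID as the UUID suffix of that NQN, together with NVME_CONNECT='nvme connect', for tests that attach through the kernel initiator. The host_management tests in this trace never issue that connect themselves; a purely illustrative attach using those variables against the subsystem this test creates later (nqn.2016-06.io.spdk:cnode0) might look like:

HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
HOSTID=${HOSTNQN##*:}            # the trailing UUID, matching how NVME_HOSTID is derived
# Hypothetical kernel-initiator attach; not executed in this trace.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 --hostnqn="$HOSTNQN" --hostid="$HOSTID"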
00:14:31.499 10:14:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:31.499 10:14:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.499 10:14:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.499 10:14:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.499 10:14:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.499 10:14:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.499 10:14:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.499 10:14:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.499 10:14:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.499 10:14:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.499 10:14:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.499 10:14:50 -- paths/export.sh@5 -- # export PATH 00:14:31.499 10:14:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.499 10:14:50 -- nvmf/common.sh@46 -- # : 0 00:14:31.499 10:14:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:31.499 10:14:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:31.499 10:14:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:31.499 10:14:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.499 10:14:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.499 10:14:50 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:31.499 10:14:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:31.499 10:14:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:31.499 10:14:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.499 10:14:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.499 10:14:50 -- target/host_management.sh@104 -- # nvmftestinit 00:14:31.499 10:14:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:31.499 10:14:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.499 10:14:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:31.499 10:14:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:31.499 10:14:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:31.499 10:14:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.500 10:14:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.500 10:14:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.500 10:14:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:31.500 10:14:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:31.500 10:14:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:31.500 10:14:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:31.500 10:14:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:31.500 10:14:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:31.500 10:14:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.500 10:14:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.500 10:14:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:31.500 10:14:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:31.500 10:14:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.500 10:14:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.500 10:14:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.500 10:14:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.500 10:14:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.500 10:14:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.500 10:14:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.500 10:14:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.500 10:14:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:31.500 10:14:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:31.500 Cannot find device "nvmf_tgt_br" 00:14:31.500 10:14:50 -- nvmf/common.sh@154 -- # true 00:14:31.500 10:14:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.500 Cannot find device "nvmf_tgt_br2" 00:14:31.500 10:14:50 -- nvmf/common.sh@155 -- # true 00:14:31.500 10:14:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:31.500 10:14:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:31.500 Cannot find device "nvmf_tgt_br" 00:14:31.500 10:14:50 -- nvmf/common.sh@157 -- # true 00:14:31.500 10:14:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:31.500 Cannot find device "nvmf_tgt_br2" 00:14:31.500 10:14:50 -- nvmf/common.sh@158 -- # true 00:14:31.500 10:14:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:31.500 10:14:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:31.500 10:14:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:31.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.759 10:14:51 -- nvmf/common.sh@161 -- # true 00:14:31.759 10:14:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.759 10:14:51 -- nvmf/common.sh@162 -- # true 00:14:31.759 10:14:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.759 10:14:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.759 10:14:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.759 10:14:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.759 10:14:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.759 10:14:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.759 10:14:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.759 10:14:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.759 10:14:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.759 10:14:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:31.759 10:14:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:31.759 10:14:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:31.759 10:14:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:31.759 10:14:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.759 10:14:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.759 10:14:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.759 10:14:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:31.759 10:14:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:31.759 10:14:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.759 10:14:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.759 10:14:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.759 10:14:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.759 10:14:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.759 10:14:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:31.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:14:31.759 00:14:31.759 --- 10.0.0.2 ping statistics --- 00:14:31.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.759 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:14:31.759 10:14:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:31.759 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.759 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:31.759 00:14:31.759 --- 10.0.0.3 ping statistics --- 00:14:31.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.759 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:31.759 10:14:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:31.759 00:14:31.759 --- 10.0.0.1 ping statistics --- 00:14:31.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.759 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:31.759 10:14:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.759 10:14:51 -- nvmf/common.sh@421 -- # return 0 00:14:31.759 10:14:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:31.759 10:14:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.759 10:14:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:31.759 10:14:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:31.759 10:14:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.759 10:14:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:31.759 10:14:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:31.759 10:14:51 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:31.759 10:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:31.759 10:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:31.759 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.759 ************************************ 00:14:31.759 START TEST nvmf_host_management 00:14:31.759 ************************************ 00:14:31.759 10:14:51 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:14:31.759 10:14:51 -- target/host_management.sh@69 -- # starttarget 00:14:31.759 10:14:51 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:31.759 10:14:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:31.759 10:14:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.759 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:31.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.759 10:14:51 -- nvmf/common.sh@469 -- # nvmfpid=82429 00:14:31.759 10:14:51 -- nvmf/common.sh@470 -- # waitforlisten 82429 00:14:31.759 10:14:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:31.759 10:14:51 -- common/autotest_common.sh@829 -- # '[' -z 82429 ']' 00:14:31.759 10:14:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.759 10:14:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.759 10:14:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.759 10:14:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.759 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.018 [2024-11-19 10:14:51.356158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:32.018 [2024-11-19 10:14:51.356304] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.018 [2024-11-19 10:14:51.505286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.018 [2024-11-19 10:14:51.541362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.018 [2024-11-19 10:14:51.541684] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:32.018 [2024-11-19 10:14:51.541802] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.018 [2024-11-19 10:14:51.541941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.018 [2024-11-19 10:14:51.542141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.018 [2024-11-19 10:14:51.542691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.018 [2024-11-19 10:14:51.542797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:32.018 [2024-11-19 10:14:51.542807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.276 10:14:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.276 10:14:51 -- common/autotest_common.sh@862 -- # return 0 00:14:32.276 10:14:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:32.276 10:14:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.276 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 10:14:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.276 10:14:51 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.276 10:14:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.276 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 [2024-11-19 10:14:51.664642] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.276 10:14:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.276 10:14:51 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:32.276 10:14:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.276 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 10:14:51 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:32.276 10:14:51 -- target/host_management.sh@23 -- # cat 00:14:32.276 10:14:51 -- target/host_management.sh@30 -- # rpc_cmd 00:14:32.276 10:14:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.276 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 Malloc0 00:14:32.276 [2024-11-19 10:14:51.731599] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.276 10:14:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.276 10:14:51 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:32.276 10:14:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.276 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.276 10:14:51 -- target/host_management.sh@73 -- # perfpid=82492 00:14:32.277 10:14:51 -- target/host_management.sh@74 -- # waitforlisten 82492 /var/tmp/bdevperf.sock 00:14:32.277 10:14:51 -- common/autotest_common.sh@829 -- # '[' -z 82492 ']' 00:14:32.277 10:14:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.277 10:14:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.277 10:14:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
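By this point host_management.sh has started nvmf_tgt inside the target namespace, waited for its RPC socket, created the TCP transport, and fed the target a batch of RPCs from rpcs.txt (built at @22-@30 and cat'd into rpc_cmd, so the individual calls are not echoed). A sketch of the equivalent manual steps follows; the launch, wait, and transport lines mirror the trace, while the subsystem batch is inferred from the Malloc0 bdev, the 10.0.0.2:4420 listen notice, and the cnode0/host0 names in the bdevperf config below (whether the real script passes -a or an explicit nvmf_subsystem_add_host is not visible here).

# Start the target inside the namespace and wait until its RPC server is ready
# (the suite's waitforlisten polls the socket; framework_wait_init is a close stand-in).
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192

# Inferred equivalent of the batched rpcs.txt (names reconstructed, not echoed by the trace):
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420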
00:14:32.277 10:14:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.277 10:14:51 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:32.277 10:14:51 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:32.277 10:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:32.277 10:14:51 -- nvmf/common.sh@520 -- # config=() 00:14:32.277 10:14:51 -- nvmf/common.sh@520 -- # local subsystem config 00:14:32.277 10:14:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:32.277 10:14:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:32.277 { 00:14:32.277 "params": { 00:14:32.277 "name": "Nvme$subsystem", 00:14:32.277 "trtype": "$TEST_TRANSPORT", 00:14:32.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.277 "adrfam": "ipv4", 00:14:32.277 "trsvcid": "$NVMF_PORT", 00:14:32.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.277 "hdgst": ${hdgst:-false}, 00:14:32.277 "ddgst": ${ddgst:-false} 00:14:32.277 }, 00:14:32.277 "method": "bdev_nvme_attach_controller" 00:14:32.277 } 00:14:32.277 EOF 00:14:32.277 )") 00:14:32.277 10:14:51 -- nvmf/common.sh@542 -- # cat 00:14:32.277 10:14:51 -- nvmf/common.sh@544 -- # jq . 00:14:32.277 10:14:51 -- nvmf/common.sh@545 -- # IFS=, 00:14:32.277 10:14:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:32.277 "params": { 00:14:32.277 "name": "Nvme0", 00:14:32.277 "trtype": "tcp", 00:14:32.277 "traddr": "10.0.0.2", 00:14:32.277 "adrfam": "ipv4", 00:14:32.277 "trsvcid": "4420", 00:14:32.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.277 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:32.277 "hdgst": false, 00:14:32.277 "ddgst": false 00:14:32.277 }, 00:14:32.277 "method": "bdev_nvme_attach_controller" 00:14:32.277 }' 00:14:32.535 [2024-11-19 10:14:51.837211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:32.535 [2024-11-19 10:14:51.837310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82492 ] 00:14:32.535 [2024-11-19 10:14:51.976286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.535 [2024-11-19 10:14:52.013813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.793 Running I/O for 10 seconds... 
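The bdevperf run just launched gets its NVMe-oF connection from gen_nvmf_target_json, whose output is the JSON printed above: a single bdev_nvme_attach_controller entry for Nvme0 against 10.0.0.2:4420 and subsystem cnode0. Below is a standalone sketch of the same invocation, writing the config to a file instead of /dev/fd/63 (the outer subsystems/bdev wrapper is the standard SPDK JSON config layout and is assumed here, since the trace only prints the attach entry), followed by the iostat polling the test uses to confirm I/O is flowing; the jq filter is the one visible in the trace.

cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF

build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvmf_bdev.json \
    -q 64 -o 65536 -w verify -t 10 &

# Wait for bdevperf's RPC server, then sample read I/O on the attached namespace.
scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
    | jq -r '.bdevs[0].num_read_ops'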
00:14:33.728 10:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.728 10:14:52 -- common/autotest_common.sh@862 -- # return 0 00:14:33.728 10:14:52 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:33.728 10:14:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.728 10:14:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 10:14:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.728 10:14:52 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.728 10:14:52 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:33.728 10:14:52 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:33.728 10:14:52 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:33.728 10:14:52 -- target/host_management.sh@52 -- # local ret=1 00:14:33.728 10:14:52 -- target/host_management.sh@53 -- # local i 00:14:33.728 10:14:52 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:33.728 10:14:52 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:33.728 10:14:52 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:33.728 10:14:52 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:33.728 10:14:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.728 10:14:52 -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 10:14:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.728 10:14:53 -- target/host_management.sh@55 -- # read_io_count=2573 00:14:33.728 10:14:53 -- target/host_management.sh@58 -- # '[' 2573 -ge 100 ']' 00:14:33.728 10:14:53 -- target/host_management.sh@59 -- # ret=0 00:14:33.728 10:14:53 -- target/host_management.sh@60 -- # break 00:14:33.728 10:14:53 -- target/host_management.sh@64 -- # return 0 00:14:33.728 10:14:53 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:33.728 10:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.728 10:14:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.728 [2024-11-19 10:14:53.008719] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008782] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.728 [2024-11-19 10:14:53.008845] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the 
state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008861] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008878] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008894] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008902] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008910] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008960] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008968] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008976] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008984] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.008992] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009017] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009050] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009066] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009092] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009100] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009126] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009137] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009145] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009153] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009161] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009169] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009218] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 
10:14:53.009242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.009258] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347530 is same with the state(5) to be set 00:14:33.729 [2024-11-19 10:14:53.010016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.729 [2024-11-19 10:14:53.010390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.729 [2024-11-19 10:14:53.010399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.010980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.010992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.730 [2024-11-19 10:14:53.011220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.730 [2024-11-19 10:14:53.011229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.731 [2024-11-19 10:14:53.011391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.731 [2024-11-19 10:14:53.011462] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a77c0 was disconnected and freed. reset controller. 
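The wall of ABORTED - SQ DELETION completions above is the host-side fallout of the nvmf_subsystem_remove_host call issued just before it: once nqn.2016-06.io.spdk:host0 is no longer allowed on the subsystem, the target drops the TCP qpair, every queued verify I/O on Nvme0n1 completes as aborted, and bdev_nvme frees the qpair and schedules a controller reset. Outside the harness the same toggle could be driven with the plain RPC client; a sketch, assuming the default /var/tmp/spdk.sock RPC socket:
# revoke the host (drops the connection and aborts in-flight I/O), then restore it
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0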
00:14:33.731 [2024-11-19 10:14:53.012626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:33.731 10:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.731 10:14:53 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:33.731 10:14:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.731 10:14:53 -- common/autotest_common.sh@10 -- # set +x 00:14:33.731 task offset: 83200 on job bdev=Nvme0n1 fails 00:14:33.731 00:14:33.731 Latency(us) 00:14:33.731 [2024-11-19T10:14:53.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.731 [2024-11-19T10:14:53.277Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:33.731 [2024-11-19T10:14:53.277Z] Job: Nvme0n1 ended in about 0.86 seconds with error 00:14:33.731 Verification LBA range: start 0x0 length 0x400 00:14:33.731 Nvme0n1 : 0.86 3099.62 193.73 74.02 0.00 19857.99 1824.58 26214.40 00:14:33.731 [2024-11-19T10:14:53.277Z] =================================================================================================================== 00:14:33.731 [2024-11-19T10:14:53.277Z] Total : 3099.62 193.73 74.02 0.00 19857.99 1824.58 26214.40 00:14:33.731 [2024-11-19 10:14:53.014736] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:33.731 [2024-11-19 10:14:53.014765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e2e0 (9): Bad file descriptor 00:14:33.731 10:14:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.731 10:14:53 -- target/host_management.sh@87 -- # sleep 1 00:14:33.731 [2024-11-19 10:14:53.023965] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:34.666 10:14:54 -- target/host_management.sh@91 -- # kill -9 82492 00:14:34.666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82492) - No such process 00:14:34.666 10:14:54 -- target/host_management.sh@91 -- # true 00:14:34.666 10:14:54 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:34.666 10:14:54 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:34.666 10:14:54 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:34.666 10:14:54 -- nvmf/common.sh@520 -- # config=() 00:14:34.666 10:14:54 -- nvmf/common.sh@520 -- # local subsystem config 00:14:34.666 10:14:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:34.666 10:14:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:34.666 { 00:14:34.666 "params": { 00:14:34.666 "name": "Nvme$subsystem", 00:14:34.666 "trtype": "$TEST_TRANSPORT", 00:14:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.666 "adrfam": "ipv4", 00:14:34.666 "trsvcid": "$NVMF_PORT", 00:14:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.666 "hdgst": ${hdgst:-false}, 00:14:34.666 "ddgst": ${ddgst:-false} 00:14:34.666 }, 00:14:34.666 "method": "bdev_nvme_attach_controller" 00:14:34.666 } 00:14:34.666 EOF 00:14:34.666 )") 00:14:34.666 10:14:54 -- nvmf/common.sh@542 -- # cat 00:14:34.666 10:14:54 -- nvmf/common.sh@544 -- # jq . 
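For reference, the MiB/s column in the latency table above follows directly from the IOPS figure and the 64 KiB I/O size used for this run:
  193.73 MiB/s ≈ 3099.62 IOPS × 65536 B ÷ 1048576 B/MiB
so the roughly 3.1k verify I/Os per second the job sustained before the forced disconnect amount to about 194 MiB/s over its 0.86 s runtime.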
00:14:34.666 10:14:54 -- nvmf/common.sh@545 -- # IFS=, 00:14:34.666 10:14:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:34.666 "params": { 00:14:34.666 "name": "Nvme0", 00:14:34.666 "trtype": "tcp", 00:14:34.667 "traddr": "10.0.0.2", 00:14:34.667 "adrfam": "ipv4", 00:14:34.667 "trsvcid": "4420", 00:14:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:34.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:34.667 "hdgst": false, 00:14:34.667 "ddgst": false 00:14:34.667 }, 00:14:34.667 "method": "bdev_nvme_attach_controller" 00:14:34.667 }' 00:14:34.667 [2024-11-19 10:14:54.085381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:34.667 [2024-11-19 10:14:54.085500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82538 ] 00:14:34.925 [2024-11-19 10:14:54.224705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.925 [2024-11-19 10:14:54.259793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.925 Running I/O for 1 seconds... 00:14:35.884 00:14:35.884 Latency(us) 00:14:35.884 [2024-11-19T10:14:55.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.884 [2024-11-19T10:14:55.430Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:35.884 Verification LBA range: start 0x0 length 0x400 00:14:35.884 Nvme0n1 : 1.01 3261.65 203.85 0.00 0.00 19265.56 1027.72 25141.99 00:14:35.884 [2024-11-19T10:14:55.430Z] =================================================================================================================== 00:14:35.884 [2024-11-19T10:14:55.430Z] Total : 3261.65 203.85 0.00 0.00 19265.56 1027.72 25141.99 00:14:36.142 10:14:55 -- target/host_management.sh@101 -- # stoptarget 00:14:36.142 10:14:55 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:36.142 10:14:55 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:36.142 10:14:55 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:36.142 10:14:55 -- target/host_management.sh@40 -- # nvmftestfini 00:14:36.142 10:14:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:36.142 10:14:55 -- nvmf/common.sh@116 -- # sync 00:14:36.142 10:14:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:36.142 10:14:55 -- nvmf/common.sh@119 -- # set +e 00:14:36.142 10:14:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:36.142 10:14:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:36.142 rmmod nvme_tcp 00:14:36.142 rmmod nvme_fabrics 00:14:36.142 rmmod nvme_keyring 00:14:36.142 10:14:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:36.142 10:14:55 -- nvmf/common.sh@123 -- # set -e 00:14:36.142 10:14:55 -- nvmf/common.sh@124 -- # return 0 00:14:36.142 10:14:55 -- nvmf/common.sh@477 -- # '[' -n 82429 ']' 00:14:36.142 10:14:55 -- nvmf/common.sh@478 -- # killprocess 82429 00:14:36.142 10:14:55 -- common/autotest_common.sh@936 -- # '[' -z 82429 ']' 00:14:36.142 10:14:55 -- common/autotest_common.sh@940 -- # kill -0 82429 00:14:36.142 10:14:55 -- common/autotest_common.sh@941 -- # uname 00:14:36.142 10:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:36.142 10:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82429 00:14:36.401 
10:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:36.401 10:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:36.401 killing process with pid 82429 00:14:36.401 10:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82429' 00:14:36.401 10:14:55 -- common/autotest_common.sh@955 -- # kill 82429 00:14:36.401 10:14:55 -- common/autotest_common.sh@960 -- # wait 82429 00:14:36.401 [2024-11-19 10:14:55.838715] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:36.401 10:14:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:36.401 10:14:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:36.401 10:14:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:36.401 10:14:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.401 10:14:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:36.401 10:14:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.401 10:14:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.401 10:14:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.401 10:14:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:36.401 00:14:36.401 real 0m4.628s 00:14:36.401 user 0m19.564s 00:14:36.401 sys 0m1.160s 00:14:36.401 10:14:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.401 10:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.401 ************************************ 00:14:36.401 END TEST nvmf_host_management 00:14:36.401 ************************************ 00:14:36.401 10:14:55 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:36.401 ************************************ 00:14:36.401 END TEST nvmf_host_management 00:14:36.401 ************************************ 00:14:36.401 00:14:36.401 real 0m5.199s 00:14:36.401 user 0m19.755s 00:14:36.401 sys 0m1.409s 00:14:36.401 10:14:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:36.401 10:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.660 10:14:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:36.660 10:14:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:36.660 10:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:36.660 10:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:36.660 ************************************ 00:14:36.660 START TEST nvmf_lvol 00:14:36.660 ************************************ 00:14:36.660 10:14:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:36.660 * Looking for test storage... 
00:14:36.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:36.660 10:14:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:36.660 10:14:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:36.660 10:14:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:36.660 10:14:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:36.660 10:14:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:36.660 10:14:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:36.660 10:14:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:36.660 10:14:56 -- scripts/common.sh@335 -- # IFS=.-: 00:14:36.660 10:14:56 -- scripts/common.sh@335 -- # read -ra ver1 00:14:36.660 10:14:56 -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.660 10:14:56 -- scripts/common.sh@336 -- # read -ra ver2 00:14:36.660 10:14:56 -- scripts/common.sh@337 -- # local 'op=<' 00:14:36.660 10:14:56 -- scripts/common.sh@339 -- # ver1_l=2 00:14:36.660 10:14:56 -- scripts/common.sh@340 -- # ver2_l=1 00:14:36.660 10:14:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:36.660 10:14:56 -- scripts/common.sh@343 -- # case "$op" in 00:14:36.660 10:14:56 -- scripts/common.sh@344 -- # : 1 00:14:36.660 10:14:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:36.660 10:14:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.660 10:14:56 -- scripts/common.sh@364 -- # decimal 1 00:14:36.660 10:14:56 -- scripts/common.sh@352 -- # local d=1 00:14:36.660 10:14:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.660 10:14:56 -- scripts/common.sh@354 -- # echo 1 00:14:36.660 10:14:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:36.660 10:14:56 -- scripts/common.sh@365 -- # decimal 2 00:14:36.660 10:14:56 -- scripts/common.sh@352 -- # local d=2 00:14:36.660 10:14:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.660 10:14:56 -- scripts/common.sh@354 -- # echo 2 00:14:36.660 10:14:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:36.660 10:14:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:36.660 10:14:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:36.660 10:14:56 -- scripts/common.sh@367 -- # return 0 00:14:36.660 10:14:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.660 10:14:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:36.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.660 --rc genhtml_branch_coverage=1 00:14:36.660 --rc genhtml_function_coverage=1 00:14:36.660 --rc genhtml_legend=1 00:14:36.660 --rc geninfo_all_blocks=1 00:14:36.660 --rc geninfo_unexecuted_blocks=1 00:14:36.660 00:14:36.660 ' 00:14:36.660 10:14:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:36.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.660 --rc genhtml_branch_coverage=1 00:14:36.660 --rc genhtml_function_coverage=1 00:14:36.660 --rc genhtml_legend=1 00:14:36.660 --rc geninfo_all_blocks=1 00:14:36.660 --rc geninfo_unexecuted_blocks=1 00:14:36.660 00:14:36.660 ' 00:14:36.660 10:14:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:36.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.660 --rc genhtml_branch_coverage=1 00:14:36.660 --rc genhtml_function_coverage=1 00:14:36.660 --rc genhtml_legend=1 00:14:36.660 --rc geninfo_all_blocks=1 00:14:36.660 --rc geninfo_unexecuted_blocks=1 00:14:36.660 00:14:36.660 ' 00:14:36.660 
10:14:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:36.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.660 --rc genhtml_branch_coverage=1 00:14:36.660 --rc genhtml_function_coverage=1 00:14:36.660 --rc genhtml_legend=1 00:14:36.660 --rc geninfo_all_blocks=1 00:14:36.660 --rc geninfo_unexecuted_blocks=1 00:14:36.660 00:14:36.660 ' 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:36.660 10:14:56 -- nvmf/common.sh@7 -- # uname -s 00:14:36.660 10:14:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.660 10:14:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.660 10:14:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.660 10:14:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.660 10:14:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.660 10:14:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.660 10:14:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.660 10:14:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.660 10:14:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.660 10:14:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.660 10:14:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:14:36.660 10:14:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:36.660 10:14:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.660 10:14:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.660 10:14:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:36.660 10:14:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:36.660 10:14:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.660 10:14:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.660 10:14:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.660 10:14:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.660 10:14:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.660 10:14:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.660 10:14:56 -- paths/export.sh@5 -- # export PATH 00:14:36.660 10:14:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.660 10:14:56 -- nvmf/common.sh@46 -- # : 0 00:14:36.660 10:14:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:36.660 10:14:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:36.660 10:14:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:36.660 10:14:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.660 10:14:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.660 10:14:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:36.660 10:14:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:36.660 10:14:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:36.660 10:14:56 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:36.660 10:14:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:36.660 10:14:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.660 10:14:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:36.660 10:14:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:36.660 10:14:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:36.660 10:14:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.660 10:14:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.660 10:14:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.660 10:14:56 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:36.660 10:14:56 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:36.660 10:14:56 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:36.661 10:14:56 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:36.661 10:14:56 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:36.661 10:14:56 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:36.661 10:14:56 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.661 10:14:56 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.661 10:14:56 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:36.661 10:14:56 -- 
nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:36.661 10:14:56 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:36.661 10:14:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:36.661 10:14:56 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:36.661 10:14:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.661 10:14:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:36.661 10:14:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:36.661 10:14:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:36.661 10:14:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:36.661 10:14:56 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:36.661 10:14:56 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:36.919 Cannot find device "nvmf_tgt_br" 00:14:36.919 10:14:56 -- nvmf/common.sh@154 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:36.919 Cannot find device "nvmf_tgt_br2" 00:14:36.919 10:14:56 -- nvmf/common.sh@155 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:36.919 10:14:56 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:36.919 Cannot find device "nvmf_tgt_br" 00:14:36.919 10:14:56 -- nvmf/common.sh@157 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:36.919 Cannot find device "nvmf_tgt_br2" 00:14:36.919 10:14:56 -- nvmf/common.sh@158 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:36.919 10:14:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:36.919 10:14:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:36.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.919 10:14:56 -- nvmf/common.sh@161 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:36.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:36.919 10:14:56 -- nvmf/common.sh@162 -- # true 00:14:36.919 10:14:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:36.919 10:14:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:36.919 10:14:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:36.919 10:14:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:36.920 10:14:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:36.920 10:14:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:36.920 10:14:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:36.920 10:14:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:36.920 10:14:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:36.920 10:14:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:36.920 10:14:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:36.920 10:14:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:36.920 10:14:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:36.920 10:14:56 -- 
nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:36.920 10:14:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:36.920 10:14:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:36.920 10:14:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:36.920 10:14:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:36.920 10:14:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:36.920 10:14:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:36.920 10:14:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:36.920 10:14:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:36.920 10:14:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:36.920 10:14:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:36.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:36.920 00:14:36.920 --- 10.0.0.2 ping statistics --- 00:14:36.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.920 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:36.920 10:14:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:36.920 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:36.920 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:36.920 00:14:36.920 --- 10.0.0.3 ping statistics --- 00:14:36.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.920 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:36.920 10:14:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:36.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:36.920 00:14:36.920 --- 10.0.0.1 ping statistics --- 00:14:36.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.920 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:36.920 10:14:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.920 10:14:56 -- nvmf/common.sh@421 -- # return 0 00:14:36.920 10:14:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:36.920 10:14:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.920 10:14:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:36.920 10:14:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:36.920 10:14:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.920 10:14:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:36.920 10:14:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:37.179 10:14:56 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:37.179 10:14:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:37.179 10:14:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:37.179 10:14:56 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 10:14:56 -- nvmf/common.sh@469 -- # nvmfpid=82767 00:14:37.179 10:14:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:37.179 10:14:56 -- nvmf/common.sh@470 -- # waitforlisten 82767 00:14:37.179 10:14:56 -- common/autotest_common.sh@829 -- # '[' -z 82767 ']' 00:14:37.179 10:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.179 10:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:37.179 10:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.179 10:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:37.179 10:14:56 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 [2024-11-19 10:14:56.556031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:37.179 [2024-11-19 10:14:56.556149] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.179 [2024-11-19 10:14:56.705907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.436 [2024-11-19 10:14:56.739932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:37.436 [2024-11-19 10:14:56.740078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:37.436 [2024-11-19 10:14:56.740091] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.436 [2024-11-19 10:14:56.740100] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
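For reference, the nvmf_veth_init sequence traced above reduces to roughly the following shell commands; the interface names and 10.0.0.x addressing are the ones used in this run, and the ordering is condensed rather than verbatim:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root namespace to target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace back to the initiator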
00:14:37.436 [2024-11-19 10:14:56.740218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.436 [2024-11-19 10:14:56.740579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.436 [2024-11-19 10:14:56.740591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.001 10:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:38.001 10:14:57 -- common/autotest_common.sh@862 -- # return 0 00:14:38.001 10:14:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:38.001 10:14:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:38.001 10:14:57 -- common/autotest_common.sh@10 -- # set +x 00:14:38.260 10:14:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.260 10:14:57 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:38.523 [2024-11-19 10:14:57.885083] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.523 10:14:57 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:38.781 10:14:58 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:38.781 10:14:58 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:39.039 10:14:58 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:39.039 10:14:58 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:39.298 10:14:58 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:39.863 10:14:59 -- target/nvmf_lvol.sh@29 -- # lvs=0e9d5cab-27eb-4255-a601-1139b4979912 00:14:39.863 10:14:59 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0e9d5cab-27eb-4255-a601-1139b4979912 lvol 20 00:14:40.121 10:14:59 -- target/nvmf_lvol.sh@32 -- # lvol=b5d245f5-b4fe-4dc5-8233-48a0f383c2f3 00:14:40.121 10:14:59 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:40.379 10:14:59 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5d245f5-b4fe-4dc5-8233-48a0f383c2f3 00:14:40.637 10:15:00 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:40.895 [2024-11-19 10:15:00.414601] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.895 10:15:00 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:41.463 10:15:00 -- target/nvmf_lvol.sh@42 -- # perf_pid=82918 00:14:41.463 10:15:00 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:41.463 10:15:00 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:42.446 10:15:01 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b5d245f5-b4fe-4dc5-8233-48a0f383c2f3 MY_SNAPSHOT 00:14:42.704 10:15:02 -- target/nvmf_lvol.sh@47 -- # snapshot=4e9a105f-65a6-45ff-99da-4db98062eee4 00:14:42.704 10:15:02 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b5d245f5-b4fe-4dc5-8233-48a0f383c2f3 30 00:14:43.271 10:15:02 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4e9a105f-65a6-45ff-99da-4db98062eee4 MY_CLONE 00:14:43.530 10:15:03 -- target/nvmf_lvol.sh@49 -- # clone=c8be4095-75a9-4fa4-8fc6-b1d609289b30 00:14:43.530 10:15:03 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate c8be4095-75a9-4fa4-8fc6-b1d609289b30 00:14:44.465 10:15:03 -- target/nvmf_lvol.sh@53 -- # wait 82918 00:14:52.594 Initializing NVMe Controllers 00:14:52.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:52.594 Controller IO queue size 128, less than required. 00:14:52.594 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:52.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:52.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:52.594 Initialization complete. Launching workers. 00:14:52.594 ======================================================== 00:14:52.594 Latency(us) 00:14:52.594 Device Information : IOPS MiB/s Average min max 00:14:52.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9511.40 37.15 13465.01 2062.08 224277.36 00:14:52.594 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10333.10 40.36 12396.02 2132.32 74902.72 00:14:52.594 ======================================================== 00:14:52.594 Total : 19844.50 77.52 12908.38 2062.08 224277.36 00:14:52.594 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b5d245f5-b4fe-4dc5-8233-48a0f383c2f3 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e9d5cab-27eb-4255-a601-1139b4979912 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:52.594 10:15:11 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:52.594 10:15:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.594 10:15:11 -- nvmf/common.sh@116 -- # sync 00:14:52.594 10:15:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.594 10:15:11 -- nvmf/common.sh@119 -- # set +e 00:14:52.594 10:15:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.594 10:15:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.594 rmmod nvme_tcp 00:14:52.594 rmmod nvme_fabrics 00:14:52.594 rmmod nvme_keyring 00:14:52.594 10:15:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.594 10:15:12 -- nvmf/common.sh@123 -- # set -e 00:14:52.594 10:15:12 -- nvmf/common.sh@124 -- # return 0 00:14:52.594 10:15:12 -- nvmf/common.sh@477 -- # '[' -n 82767 ']' 00:14:52.594 10:15:12 -- nvmf/common.sh@478 -- # killprocess 82767 00:14:52.594 10:15:12 -- common/autotest_common.sh@936 -- # '[' -z 82767 ']' 00:14:52.594 10:15:12 -- common/autotest_common.sh@940 -- # kill -0 82767 00:14:52.594 10:15:12 -- common/autotest_common.sh@941 -- # uname 00:14:52.594 10:15:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:52.594 10:15:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 82767 00:14:52.594 killing process with pid 82767 00:14:52.594 10:15:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:52.594 10:15:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:52.594 10:15:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82767' 00:14:52.594 10:15:12 -- common/autotest_common.sh@955 -- # kill 82767 00:14:52.594 10:15:12 -- common/autotest_common.sh@960 -- # wait 82767 00:14:52.852 10:15:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.852 10:15:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.852 10:15:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.852 10:15:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.852 10:15:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.852 10:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.852 10:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.852 10:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.852 10:15:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:52.852 00:14:52.853 real 0m16.281s 00:14:52.853 user 1m6.718s 00:14:52.853 sys 0m4.306s 00:14:52.853 10:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:52.853 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:52.853 ************************************ 00:14:52.853 END TEST nvmf_lvol 00:14:52.853 ************************************ 00:14:52.853 10:15:12 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.853 10:15:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:52.853 10:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.853 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:52.853 ************************************ 00:14:52.853 START TEST nvmf_lvs_grow 00:14:52.853 ************************************ 00:14:52.853 10:15:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:52.853 * Looking for test storage... 
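Stripped of the harness, the nvmf_lvol run that just ended drove the following sequence; command paths are shortened, rpc.py stands for scripts/rpc.py as in the trace, and the UUIDs the log prints are captured into shell variables here instead of being repeated:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                 # Malloc0
  rpc.py bdev_malloc_create 64 512                                 # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB lvol on the raid-backed store
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &           # background I/O for the whole test
  perf_pid=$!
  sleep 1
  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)            # snapshot while writes are in flight
  rpc.py bdev_lvol_resize "$lvol" 30                               # grow the live lvol from 20 to 30 MiB
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"                                # detach the clone from its snapshot
  wait "$perf_pid"
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"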
00:14:52.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.853 10:15:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:52.853 10:15:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:52.853 10:15:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.112 10:15:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.112 10:15:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.112 10:15:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.112 10:15:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.112 10:15:12 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.112 10:15:12 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.112 10:15:12 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.112 10:15:12 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.112 10:15:12 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.112 10:15:12 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.112 10:15:12 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.112 10:15:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.112 10:15:12 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.112 10:15:12 -- scripts/common.sh@344 -- # : 1 00:14:53.112 10:15:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.112 10:15:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:53.112 10:15:12 -- scripts/common.sh@364 -- # decimal 1 00:14:53.112 10:15:12 -- scripts/common.sh@352 -- # local d=1 00:14:53.112 10:15:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.112 10:15:12 -- scripts/common.sh@354 -- # echo 1 00:14:53.112 10:15:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.112 10:15:12 -- scripts/common.sh@365 -- # decimal 2 00:14:53.112 10:15:12 -- scripts/common.sh@352 -- # local d=2 00:14:53.112 10:15:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.112 10:15:12 -- scripts/common.sh@354 -- # echo 2 00:14:53.112 10:15:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.112 10:15:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.112 10:15:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.112 10:15:12 -- scripts/common.sh@367 -- # return 0 00:14:53.112 10:15:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.112 10:15:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.112 --rc genhtml_branch_coverage=1 00:14:53.112 --rc genhtml_function_coverage=1 00:14:53.112 --rc genhtml_legend=1 00:14:53.112 --rc geninfo_all_blocks=1 00:14:53.112 --rc geninfo_unexecuted_blocks=1 00:14:53.112 00:14:53.112 ' 00:14:53.112 10:15:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.112 --rc genhtml_branch_coverage=1 00:14:53.112 --rc genhtml_function_coverage=1 00:14:53.112 --rc genhtml_legend=1 00:14:53.112 --rc geninfo_all_blocks=1 00:14:53.112 --rc geninfo_unexecuted_blocks=1 00:14:53.112 00:14:53.112 ' 00:14:53.112 10:15:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.112 --rc genhtml_branch_coverage=1 00:14:53.112 --rc genhtml_function_coverage=1 00:14:53.112 --rc genhtml_legend=1 00:14:53.112 --rc geninfo_all_blocks=1 00:14:53.112 --rc geninfo_unexecuted_blocks=1 00:14:53.112 00:14:53.112 ' 00:14:53.112 
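The lt 1.15 2 / cmp_versions trace above is the lcov version gate from scripts/common.sh: split both version strings on dots, dashes and colons, then compare numerically field by field. A rough standalone sketch of the same idea, assuming purely numeric fields:

  version_lt() {                       # succeed when $1 sorts before $2
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0}
          y=${b[i]:-0}
          ((x < y)) && return 0
          ((x > y)) && return 1
      done
      return 1                         # equal versions are not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2, using legacy LCOV_OPTS"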
10:15:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.112 --rc genhtml_branch_coverage=1 00:14:53.112 --rc genhtml_function_coverage=1 00:14:53.112 --rc genhtml_legend=1 00:14:53.112 --rc geninfo_all_blocks=1 00:14:53.112 --rc geninfo_unexecuted_blocks=1 00:14:53.112 00:14:53.112 ' 00:14:53.112 10:15:12 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.112 10:15:12 -- nvmf/common.sh@7 -- # uname -s 00:14:53.112 10:15:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.112 10:15:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.112 10:15:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.112 10:15:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.112 10:15:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.112 10:15:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.112 10:15:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.112 10:15:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.112 10:15:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.112 10:15:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.112 10:15:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:14:53.112 10:15:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:14:53.112 10:15:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.112 10:15:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.112 10:15:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.112 10:15:12 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.112 10:15:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.112 10:15:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.112 10:15:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.112 10:15:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.112 10:15:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.113 10:15:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.113 10:15:12 -- paths/export.sh@5 -- # export PATH 00:14:53.113 10:15:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.113 10:15:12 -- nvmf/common.sh@46 -- # : 0 00:14:53.113 10:15:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.113 10:15:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.113 10:15:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.113 10:15:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.113 10:15:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.113 10:15:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.113 10:15:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.113 10:15:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.113 10:15:12 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:53.113 10:15:12 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:53.113 10:15:12 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:53.113 10:15:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:53.113 10:15:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.113 10:15:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.113 10:15:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.113 10:15:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.113 10:15:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.113 10:15:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.113 10:15:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.113 10:15:12 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:53.113 10:15:12 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:53.113 10:15:12 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:53.113 10:15:12 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:53.113 10:15:12 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:53.113 10:15:12 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:53.113 10:15:12 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.113 10:15:12 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.113 10:15:12 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.113 10:15:12 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:53.113 10:15:12 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.113 10:15:12 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.113 10:15:12 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.113 10:15:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.113 10:15:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.113 10:15:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.113 10:15:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.113 10:15:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.113 10:15:12 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:53.113 10:15:12 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:53.113 Cannot find device "nvmf_tgt_br" 00:14:53.113 10:15:12 -- nvmf/common.sh@154 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.113 Cannot find device "nvmf_tgt_br2" 00:14:53.113 10:15:12 -- nvmf/common.sh@155 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:53.113 10:15:12 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:53.113 Cannot find device "nvmf_tgt_br" 00:14:53.113 10:15:12 -- nvmf/common.sh@157 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:53.113 Cannot find device "nvmf_tgt_br2" 00:14:53.113 10:15:12 -- nvmf/common.sh@158 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:53.113 10:15:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:53.113 10:15:12 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.113 10:15:12 -- nvmf/common.sh@161 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.113 10:15:12 -- nvmf/common.sh@162 -- # true 00:14:53.113 10:15:12 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.113 10:15:12 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.113 10:15:12 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.113 10:15:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.113 10:15:12 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.372 10:15:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.372 10:15:12 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.372 10:15:12 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.372 10:15:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.372 10:15:12 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:53.372 10:15:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:53.372 10:15:12 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:53.372 10:15:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:53.372 10:15:12 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.372 10:15:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:14:53.372 10:15:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.372 10:15:12 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:53.372 10:15:12 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:53.372 10:15:12 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.372 10:15:12 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.372 10:15:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.372 10:15:12 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.372 10:15:12 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.372 10:15:12 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:53.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:53.372 00:14:53.372 --- 10.0.0.2 ping statistics --- 00:14:53.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.372 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:53.372 10:15:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:53.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:14:53.372 00:14:53.372 --- 10.0.0.3 ping statistics --- 00:14:53.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.372 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:53.372 10:15:12 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:53.372 00:14:53.372 --- 10.0.0.1 ping statistics --- 00:14:53.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.372 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:53.372 10:15:12 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.372 10:15:12 -- nvmf/common.sh@421 -- # return 0 00:14:53.372 10:15:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:53.372 10:15:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.372 10:15:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:53.372 10:15:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:53.372 10:15:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.372 10:15:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:53.372 10:15:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:53.372 10:15:12 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:53.372 10:15:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:53.372 10:15:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.372 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:53.372 10:15:12 -- nvmf/common.sh@469 -- # nvmfpid=83301 00:14:53.372 10:15:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:53.372 10:15:12 -- nvmf/common.sh@470 -- # waitforlisten 83301 00:14:53.372 10:15:12 -- common/autotest_common.sh@829 -- # '[' -z 83301 ']' 00:14:53.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
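nvmfappstart, entered just above, launches the target inside the namespace and then blocks until the RPC socket answers before any configuration is sent; waitforlisten allows up to 100 retries (max_retries=100 in the trace). A minimal stand-in for that pattern, with rpc.py again meaning scripts/rpc.py and a plain socket probe standing in for the helper's real RPC-based check:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break            # target is up once its RPC socket exists
      kill -0 "$nvmfpid" 2>/dev/null || exit 1        # bail out if the target already died
      sleep 0.1
  done
  rpc.py nvmf_create_transport -t tcp -o -u 8192      # transport options as traced just below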
00:14:53.372 10:15:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.372 10:15:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.372 10:15:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.372 10:15:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.372 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:14:53.372 [2024-11-19 10:15:12.914109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:53.372 [2024-11-19 10:15:12.914215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.632 [2024-11-19 10:15:13.053319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.632 [2024-11-19 10:15:13.086807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:53.632 [2024-11-19 10:15:13.086964] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.632 [2024-11-19 10:15:13.086980] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.632 [2024-11-19 10:15:13.086988] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.632 [2024-11-19 10:15:13.087020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.568 10:15:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.568 10:15:13 -- common/autotest_common.sh@862 -- # return 0 00:14:54.568 10:15:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:54.568 10:15:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.568 10:15:13 -- common/autotest_common.sh@10 -- # set +x 00:14:54.568 10:15:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.568 10:15:13 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.827 [2024-11-19 10:15:14.218166] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:54.827 10:15:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:54.827 10:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.827 10:15:14 -- common/autotest_common.sh@10 -- # set +x 00:14:54.827 ************************************ 00:14:54.827 START TEST lvs_grow_clean 00:14:54.827 ************************************ 00:14:54.827 10:15:14 -- common/autotest_common.sh@1114 -- # lvs_grow 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.827 10:15:14 -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:54.827 10:15:14 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.086 10:15:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:55.086 10:15:14 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:55.345 10:15:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:14:55.345 10:15:14 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:14:55.345 10:15:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 lvol 150 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=2d890a29-10b9-434b-82d4-ffa134626da4 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:55.937 10:15:15 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:56.240 [2024-11-19 10:15:15.666840] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:56.240 [2024-11-19 10:15:15.666926] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:56.240 true 00:14:56.240 10:15:15 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:14:56.240 10:15:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:56.499 10:15:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:56.499 10:15:15 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:56.758 10:15:16 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2d890a29-10b9-434b-82d4-ffa134626da4 00:14:57.325 10:15:16 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:57.583 [2024-11-19 10:15:16.914528] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.583 10:15:16 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.842 10:15:17 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:57.842 10:15:17 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83470 00:14:57.842 10:15:17 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.842 10:15:17 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83470 /var/tmp/bdevperf.sock 00:14:57.842 10:15:17 -- common/autotest_common.sh@829 -- # '[' -z 83470 ']' 00:14:57.842 10:15:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.842 10:15:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.842 10:15:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.842 10:15:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.842 10:15:17 -- common/autotest_common.sh@10 -- # set +x 00:14:57.842 [2024-11-19 10:15:17.308875] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:57.842 [2024-11-19 10:15:17.308974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83470 ] 00:14:58.101 [2024-11-19 10:15:17.445149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.101 [2024-11-19 10:15:17.483950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.101 10:15:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.101 10:15:17 -- common/autotest_common.sh@862 -- # return 0 00:14:58.101 10:15:17 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:58.669 Nvme0n1 00:14:58.669 10:15:17 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:58.928 [ 00:14:58.928 { 00:14:58.928 "aliases": [ 00:14:58.928 "2d890a29-10b9-434b-82d4-ffa134626da4" 00:14:58.928 ], 00:14:58.928 "assigned_rate_limits": { 00:14:58.928 "r_mbytes_per_sec": 0, 00:14:58.928 "rw_ios_per_sec": 0, 00:14:58.928 "rw_mbytes_per_sec": 0, 00:14:58.928 "w_mbytes_per_sec": 0 00:14:58.928 }, 00:14:58.928 "block_size": 4096, 00:14:58.928 "claimed": false, 00:14:58.928 "driver_specific": { 00:14:58.928 "mp_policy": "active_passive", 00:14:58.928 "nvme": [ 00:14:58.928 { 00:14:58.928 "ctrlr_data": { 00:14:58.928 "ana_reporting": false, 00:14:58.928 "cntlid": 1, 00:14:58.928 "firmware_revision": "24.01.1", 00:14:58.928 "model_number": "SPDK bdev Controller", 00:14:58.928 "multi_ctrlr": true, 00:14:58.928 "oacs": { 00:14:58.928 "firmware": 0, 00:14:58.928 "format": 0, 00:14:58.928 "ns_manage": 0, 00:14:58.928 "security": 0 00:14:58.928 }, 00:14:58.928 "serial_number": "SPDK0", 00:14:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.928 "vendor_id": "0x8086" 00:14:58.928 }, 00:14:58.928 "ns_data": { 00:14:58.928 "can_share": true, 00:14:58.928 "id": 1 00:14:58.928 }, 00:14:58.928 "trid": { 00:14:58.928 "adrfam": "IPv4", 00:14:58.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.928 "traddr": "10.0.0.2", 00:14:58.928 "trsvcid": "4420", 00:14:58.928 "trtype": "TCP" 00:14:58.928 }, 00:14:58.928 "vs": { 00:14:58.928 "nvme_version": "1.3" 00:14:58.928 } 00:14:58.928 } 00:14:58.928 ] 00:14:58.928 }, 00:14:58.928 "name": "Nvme0n1", 00:14:58.928 "num_blocks": 38912, 00:14:58.928 "product_name": "NVMe disk", 00:14:58.928 "supported_io_types": { 00:14:58.928 "abort": true, 00:14:58.928 "compare": 
true, 00:14:58.928 "compare_and_write": true, 00:14:58.928 "flush": true, 00:14:58.928 "nvme_admin": true, 00:14:58.928 "nvme_io": true, 00:14:58.928 "read": true, 00:14:58.928 "reset": true, 00:14:58.928 "unmap": true, 00:14:58.928 "write": true, 00:14:58.928 "write_zeroes": true 00:14:58.928 }, 00:14:58.928 "uuid": "2d890a29-10b9-434b-82d4-ffa134626da4", 00:14:58.928 "zoned": false 00:14:58.928 } 00:14:58.928 ] 00:14:58.928 10:15:18 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83504 00:14:58.928 10:15:18 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:58.928 10:15:18 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.928 Running I/O for 10 seconds... 00:14:59.860 Latency(us) 00:14:59.860 [2024-11-19T10:15:19.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.860 [2024-11-19T10:15:19.406Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.860 Nvme0n1 : 1.00 8172.00 31.92 0.00 0.00 0.00 0.00 0.00 00:14:59.860 [2024-11-19T10:15:19.406Z] =================================================================================================================== 00:14:59.860 [2024-11-19T10:15:19.406Z] Total : 8172.00 31.92 0.00 0.00 0.00 0.00 0.00 00:14:59.860 00:15:00.794 10:15:20 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:01.052 [2024-11-19T10:15:20.598Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:01.052 Nvme0n1 : 2.00 8097.00 31.63 0.00 0.00 0.00 0.00 0.00 00:15:01.052 [2024-11-19T10:15:20.598Z] =================================================================================================================== 00:15:01.052 [2024-11-19T10:15:20.598Z] Total : 8097.00 31.63 0.00 0.00 0.00 0.00 0.00 00:15:01.052 00:15:01.052 true 00:15:01.310 10:15:20 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:01.310 10:15:20 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:01.567 10:15:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:01.567 10:15:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:01.567 10:15:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 83504 00:15:02.133 [2024-11-19T10:15:21.679Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.133 Nvme0n1 : 3.00 7931.00 30.98 0.00 0.00 0.00 0.00 0.00 00:15:02.133 [2024-11-19T10:15:21.679Z] =================================================================================================================== 00:15:02.133 [2024-11-19T10:15:21.679Z] Total : 7931.00 30.98 0.00 0.00 0.00 0.00 0.00 00:15:02.133 00:15:03.116 [2024-11-19T10:15:22.662Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.116 Nvme0n1 : 4.00 7659.50 29.92 0.00 0.00 0.00 0.00 0.00 00:15:03.116 [2024-11-19T10:15:22.662Z] =================================================================================================================== 00:15:03.116 [2024-11-19T10:15:22.662Z] Total : 7659.50 29.92 0.00 0.00 0.00 0.00 0.00 00:15:03.116 00:15:04.051 [2024-11-19T10:15:23.597Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.051 Nvme0n1 : 5.00 7583.20 29.62 0.00 0.00 0.00 0.00 0.00 00:15:04.051 [2024-11-19T10:15:23.597Z] 
=================================================================================================================== 00:15:04.051 [2024-11-19T10:15:23.597Z] Total : 7583.20 29.62 0.00 0.00 0.00 0.00 0.00 00:15:04.051 00:15:04.986 [2024-11-19T10:15:24.532Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.986 Nvme0n1 : 6.00 7460.67 29.14 0.00 0.00 0.00 0.00 0.00 00:15:04.986 [2024-11-19T10:15:24.532Z] =================================================================================================================== 00:15:04.986 [2024-11-19T10:15:24.532Z] Total : 7460.67 29.14 0.00 0.00 0.00 0.00 0.00 00:15:04.986 00:15:05.921 [2024-11-19T10:15:25.467Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.921 Nvme0n1 : 7.00 7452.86 29.11 0.00 0.00 0.00 0.00 0.00 00:15:05.921 [2024-11-19T10:15:25.467Z] =================================================================================================================== 00:15:05.921 [2024-11-19T10:15:25.467Z] Total : 7452.86 29.11 0.00 0.00 0.00 0.00 0.00 00:15:05.921 00:15:06.858 [2024-11-19T10:15:26.404Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:06.858 Nvme0n1 : 8.00 7412.88 28.96 0.00 0.00 0.00 0.00 0.00 00:15:06.858 [2024-11-19T10:15:26.404Z] =================================================================================================================== 00:15:06.858 [2024-11-19T10:15:26.404Z] Total : 7412.88 28.96 0.00 0.00 0.00 0.00 0.00 00:15:06.858 00:15:08.235 [2024-11-19T10:15:27.781Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.235 Nvme0n1 : 9.00 7395.33 28.89 0.00 0.00 0.00 0.00 0.00 00:15:08.235 [2024-11-19T10:15:27.781Z] =================================================================================================================== 00:15:08.235 [2024-11-19T10:15:27.781Z] Total : 7395.33 28.89 0.00 0.00 0.00 0.00 0.00 00:15:08.235 00:15:09.193 [2024-11-19T10:15:28.739Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.193 Nvme0n1 : 10.00 7352.20 28.72 0.00 0.00 0.00 0.00 0.00 00:15:09.193 [2024-11-19T10:15:28.739Z] =================================================================================================================== 00:15:09.193 [2024-11-19T10:15:28.739Z] Total : 7352.20 28.72 0.00 0.00 0.00 0.00 0.00 00:15:09.193 00:15:09.193 00:15:09.193 Latency(us) 00:15:09.193 [2024-11-19T10:15:28.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.193 [2024-11-19T10:15:28.739Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.193 Nvme0n1 : 10.01 7360.88 28.75 0.00 0.00 17383.98 2115.03 51237.24 00:15:09.193 [2024-11-19T10:15:28.739Z] =================================================================================================================== 00:15:09.193 [2024-11-19T10:15:28.739Z] Total : 7360.88 28.75 0.00 0.00 17383.98 2115.03 51237.24 00:15:09.193 0 00:15:09.193 10:15:28 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83470 00:15:09.193 10:15:28 -- common/autotest_common.sh@936 -- # '[' -z 83470 ']' 00:15:09.193 10:15:28 -- common/autotest_common.sh@940 -- # kill -0 83470 00:15:09.193 10:15:28 -- common/autotest_common.sh@941 -- # uname 00:15:09.193 10:15:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.193 10:15:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83470 00:15:09.193 10:15:28 -- common/autotest_common.sh@942 -- 
# process_name=reactor_1 00:15:09.193 10:15:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:09.193 killing process with pid 83470 00:15:09.193 10:15:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83470' 00:15:09.193 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.193 00:15:09.193 Latency(us) 00:15:09.193 [2024-11-19T10:15:28.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.193 [2024-11-19T10:15:28.739Z] =================================================================================================================== 00:15:09.193 [2024-11-19T10:15:28.739Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.193 10:15:28 -- common/autotest_common.sh@955 -- # kill 83470 00:15:09.193 10:15:28 -- common/autotest_common.sh@960 -- # wait 83470 00:15:09.193 10:15:28 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:09.469 10:15:28 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:09.469 10:15:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:09.740 10:15:29 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:09.740 10:15:29 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:09.740 10:15:29 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:10.000 [2024-11-19 10:15:29.430475] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:10.000 10:15:29 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:10.000 10:15:29 -- common/autotest_common.sh@650 -- # local es=0 00:15:10.000 10:15:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:10.000 10:15:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.000 10:15:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.000 10:15:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.000 10:15:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.000 10:15:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.000 10:15:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.000 10:15:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:10.000 10:15:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:10.000 10:15:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:10.259 2024/11/19 10:15:29 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f8bcf7e0-9ffb-4931-91f1-57f07b35f613], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:10.259 request: 00:15:10.259 { 00:15:10.259 "method": "bdev_lvol_get_lvstores", 00:15:10.259 "params": { 00:15:10.259 "uuid": "f8bcf7e0-9ffb-4931-91f1-57f07b35f613" 00:15:10.259 } 00:15:10.259 } 00:15:10.259 Got JSON-RPC 
error response 00:15:10.259 GoRPCClient: error on JSON-RPC call 00:15:10.259 10:15:29 -- common/autotest_common.sh@653 -- # es=1 00:15:10.259 10:15:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.259 10:15:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.259 10:15:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.259 10:15:29 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.516 aio_bdev 00:15:10.774 10:15:30 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 2d890a29-10b9-434b-82d4-ffa134626da4 00:15:10.774 10:15:30 -- common/autotest_common.sh@897 -- # local bdev_name=2d890a29-10b9-434b-82d4-ffa134626da4 00:15:10.774 10:15:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.774 10:15:30 -- common/autotest_common.sh@899 -- # local i 00:15:10.774 10:15:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.774 10:15:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.774 10:15:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:11.033 10:15:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2d890a29-10b9-434b-82d4-ffa134626da4 -t 2000 00:15:11.291 [ 00:15:11.291 { 00:15:11.291 "aliases": [ 00:15:11.291 "lvs/lvol" 00:15:11.291 ], 00:15:11.291 "assigned_rate_limits": { 00:15:11.292 "r_mbytes_per_sec": 0, 00:15:11.292 "rw_ios_per_sec": 0, 00:15:11.292 "rw_mbytes_per_sec": 0, 00:15:11.292 "w_mbytes_per_sec": 0 00:15:11.292 }, 00:15:11.292 "block_size": 4096, 00:15:11.292 "claimed": false, 00:15:11.292 "driver_specific": { 00:15:11.292 "lvol": { 00:15:11.292 "base_bdev": "aio_bdev", 00:15:11.292 "clone": false, 00:15:11.292 "esnap_clone": false, 00:15:11.292 "lvol_store_uuid": "f8bcf7e0-9ffb-4931-91f1-57f07b35f613", 00:15:11.292 "snapshot": false, 00:15:11.292 "thin_provision": false 00:15:11.292 } 00:15:11.292 }, 00:15:11.292 "name": "2d890a29-10b9-434b-82d4-ffa134626da4", 00:15:11.292 "num_blocks": 38912, 00:15:11.292 "product_name": "Logical Volume", 00:15:11.292 "supported_io_types": { 00:15:11.292 "abort": false, 00:15:11.292 "compare": false, 00:15:11.292 "compare_and_write": false, 00:15:11.292 "flush": false, 00:15:11.292 "nvme_admin": false, 00:15:11.292 "nvme_io": false, 00:15:11.292 "read": true, 00:15:11.292 "reset": true, 00:15:11.292 "unmap": true, 00:15:11.292 "write": true, 00:15:11.292 "write_zeroes": true 00:15:11.292 }, 00:15:11.292 "uuid": "2d890a29-10b9-434b-82d4-ffa134626da4", 00:15:11.292 "zoned": false 00:15:11.292 } 00:15:11.292 ] 00:15:11.292 10:15:30 -- common/autotest_common.sh@905 -- # return 0 00:15:11.292 10:15:30 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:11.292 10:15:30 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:11.858 10:15:31 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:11.858 10:15:31 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:11.858 10:15:31 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:11.858 10:15:31 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:11.858 10:15:31 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete 2d890a29-10b9-434b-82d4-ffa134626da4 00:15:12.116 10:15:31 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8bcf7e0-9ffb-4931-91f1-57f07b35f613 00:15:12.375 10:15:31 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:12.941 10:15:32 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.199 00:15:13.199 real 0m18.369s 00:15:13.199 user 0m17.759s 00:15:13.199 sys 0m2.140s 00:15:13.199 10:15:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:13.199 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.199 ************************************ 00:15:13.199 END TEST lvs_grow_clean 00:15:13.199 ************************************ 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:13.199 10:15:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:13.199 10:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:13.199 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:15:13.199 ************************************ 00:15:13.199 START TEST lvs_grow_dirty 00:15:13.199 ************************************ 00:15:13.199 10:15:32 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:13.199 10:15:32 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:13.766 10:15:33 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:13.766 10:15:33 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:14.023 10:15:33 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:14.023 10:15:33 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:14.023 10:15:33 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:14.281 10:15:33 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:14.281 10:15:33 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:14.281 10:15:33 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb lvol 150 00:15:14.539 10:15:33 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:14.539 10:15:33 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:14.539 10:15:33 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:14.797 [2024-11-19 10:15:34.298853] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:14.797 [2024-11-19 10:15:34.298948] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:14.797 true 00:15:14.797 10:15:34 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:14.797 10:15:34 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:15.382 10:15:34 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:15.382 10:15:34 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:15.647 10:15:35 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:15.905 10:15:35 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:16.163 10:15:35 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:16.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.422 10:15:35 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83902 00:15:16.422 10:15:35 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:16.422 10:15:35 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:16.422 10:15:35 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83902 /var/tmp/bdevperf.sock 00:15:16.422 10:15:35 -- common/autotest_common.sh@829 -- # '[' -z 83902 ']' 00:15:16.422 10:15:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.422 10:15:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.422 10:15:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.422 10:15:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.422 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:15:16.422 [2024-11-19 10:15:35.865232] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:16.422 [2024-11-19 10:15:35.865328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83902 ] 00:15:16.681 [2024-11-19 10:15:35.996611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.681 [2024-11-19 10:15:36.040130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.614 10:15:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.614 10:15:36 -- common/autotest_common.sh@862 -- # return 0 00:15:17.614 10:15:36 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:17.873 Nvme0n1 00:15:17.873 10:15:37 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:18.440 [ 00:15:18.440 { 00:15:18.440 "aliases": [ 00:15:18.440 "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e" 00:15:18.440 ], 00:15:18.440 "assigned_rate_limits": { 00:15:18.440 "r_mbytes_per_sec": 0, 00:15:18.440 "rw_ios_per_sec": 0, 00:15:18.440 "rw_mbytes_per_sec": 0, 00:15:18.440 "w_mbytes_per_sec": 0 00:15:18.440 }, 00:15:18.440 "block_size": 4096, 00:15:18.440 "claimed": false, 00:15:18.440 "driver_specific": { 00:15:18.440 "mp_policy": "active_passive", 00:15:18.440 "nvme": [ 00:15:18.440 { 00:15:18.440 "ctrlr_data": { 00:15:18.440 "ana_reporting": false, 00:15:18.440 "cntlid": 1, 00:15:18.440 "firmware_revision": "24.01.1", 00:15:18.440 "model_number": "SPDK bdev Controller", 00:15:18.440 "multi_ctrlr": true, 00:15:18.440 "oacs": { 00:15:18.440 "firmware": 0, 00:15:18.440 "format": 0, 00:15:18.440 "ns_manage": 0, 00:15:18.440 "security": 0 00:15:18.440 }, 00:15:18.440 "serial_number": "SPDK0", 00:15:18.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:18.440 "vendor_id": "0x8086" 00:15:18.440 }, 00:15:18.440 "ns_data": { 00:15:18.440 "can_share": true, 00:15:18.440 "id": 1 00:15:18.440 }, 00:15:18.440 "trid": { 00:15:18.440 "adrfam": "IPv4", 00:15:18.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:18.440 "traddr": "10.0.0.2", 00:15:18.440 "trsvcid": "4420", 00:15:18.440 "trtype": "TCP" 00:15:18.440 }, 00:15:18.440 "vs": { 00:15:18.440 "nvme_version": "1.3" 00:15:18.440 } 00:15:18.440 } 00:15:18.440 ] 00:15:18.440 }, 00:15:18.440 "name": "Nvme0n1", 00:15:18.440 "num_blocks": 38912, 00:15:18.440 "product_name": "NVMe disk", 00:15:18.440 "supported_io_types": { 00:15:18.440 "abort": true, 00:15:18.440 "compare": true, 00:15:18.440 "compare_and_write": true, 00:15:18.440 "flush": true, 00:15:18.440 "nvme_admin": true, 00:15:18.440 "nvme_io": true, 00:15:18.440 "read": true, 00:15:18.440 "reset": true, 00:15:18.440 "unmap": true, 00:15:18.440 "write": true, 00:15:18.440 "write_zeroes": true 00:15:18.440 }, 00:15:18.440 "uuid": "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e", 00:15:18.440 "zoned": false 00:15:18.440 } 00:15:18.440 ] 00:15:18.440 10:15:37 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83957 00:15:18.440 10:15:37 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:18.440 10:15:37 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:18.440 Running I/O for 10 seconds... 
00:15:19.374 Latency(us) 00:15:19.374 [2024-11-19T10:15:38.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.375 [2024-11-19T10:15:38.921Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.375 Nvme0n1 : 1.00 7605.00 29.71 0.00 0.00 0.00 0.00 0.00 00:15:19.375 [2024-11-19T10:15:38.921Z] =================================================================================================================== 00:15:19.375 [2024-11-19T10:15:38.921Z] Total : 7605.00 29.71 0.00 0.00 0.00 0.00 0.00 00:15:19.375 00:15:20.308 10:15:39 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:20.566 [2024-11-19T10:15:40.112Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.566 Nvme0n1 : 2.00 7655.00 29.90 0.00 0.00 0.00 0.00 0.00 00:15:20.566 [2024-11-19T10:15:40.112Z] =================================================================================================================== 00:15:20.566 [2024-11-19T10:15:40.112Z] Total : 7655.00 29.90 0.00 0.00 0.00 0.00 0.00 00:15:20.566 00:15:20.566 true 00:15:20.566 10:15:40 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:20.566 10:15:40 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:20.824 10:15:40 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:20.824 10:15:40 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:20.824 10:15:40 -- target/nvmf_lvs_grow.sh@65 -- # wait 83957 00:15:21.391 [2024-11-19T10:15:40.937Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.391 Nvme0n1 : 3.00 7414.00 28.96 0.00 0.00 0.00 0.00 0.00 00:15:21.391 [2024-11-19T10:15:40.937Z] =================================================================================================================== 00:15:21.391 [2024-11-19T10:15:40.937Z] Total : 7414.00 28.96 0.00 0.00 0.00 0.00 0.00 00:15:21.391 00:15:22.766 [2024-11-19T10:15:42.312Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.766 Nvme0n1 : 4.00 7200.25 28.13 0.00 0.00 0.00 0.00 0.00 00:15:22.766 [2024-11-19T10:15:42.312Z] =================================================================================================================== 00:15:22.766 [2024-11-19T10:15:42.312Z] Total : 7200.25 28.13 0.00 0.00 0.00 0.00 0.00 00:15:22.766 00:15:23.700 [2024-11-19T10:15:43.246Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.701 Nvme0n1 : 5.00 7097.80 27.73 0.00 0.00 0.00 0.00 0.00 00:15:23.701 [2024-11-19T10:15:43.247Z] =================================================================================================================== 00:15:23.701 [2024-11-19T10:15:43.247Z] Total : 7097.80 27.73 0.00 0.00 0.00 0.00 0.00 00:15:23.701 00:15:24.637 [2024-11-19T10:15:44.183Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.637 Nvme0n1 : 6.00 7037.00 27.49 0.00 0.00 0.00 0.00 0.00 00:15:24.637 [2024-11-19T10:15:44.183Z] =================================================================================================================== 00:15:24.637 [2024-11-19T10:15:44.183Z] Total : 7037.00 27.49 0.00 0.00 0.00 0.00 0.00 00:15:24.637 00:15:25.572 [2024-11-19T10:15:45.118Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:15:25.572 Nvme0n1 : 7.00 7028.43 27.45 0.00 0.00 0.00 0.00 0.00 00:15:25.572 [2024-11-19T10:15:45.118Z] =================================================================================================================== 00:15:25.572 [2024-11-19T10:15:45.118Z] Total : 7028.43 27.45 0.00 0.00 0.00 0.00 0.00 00:15:25.572 00:15:26.507 [2024-11-19T10:15:46.053Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:26.507 Nvme0n1 : 8.00 7042.50 27.51 0.00 0.00 0.00 0.00 0.00 00:15:26.507 [2024-11-19T10:15:46.053Z] =================================================================================================================== 00:15:26.507 [2024-11-19T10:15:46.053Z] Total : 7042.50 27.51 0.00 0.00 0.00 0.00 0.00 00:15:26.507 00:15:27.442 [2024-11-19T10:15:46.988Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.442 Nvme0n1 : 9.00 7045.89 27.52 0.00 0.00 0.00 0.00 0.00 00:15:27.442 [2024-11-19T10:15:46.988Z] =================================================================================================================== 00:15:27.442 [2024-11-19T10:15:46.988Z] Total : 7045.89 27.52 0.00 0.00 0.00 0.00 0.00 00:15:27.442 00:15:28.380 [2024-11-19T10:15:47.926Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.380 Nvme0n1 : 10.00 6980.20 27.27 0.00 0.00 0.00 0.00 0.00 00:15:28.380 [2024-11-19T10:15:47.926Z] =================================================================================================================== 00:15:28.380 [2024-11-19T10:15:47.926Z] Total : 6980.20 27.27 0.00 0.00 0.00 0.00 0.00 00:15:28.380 00:15:28.639 00:15:28.639 Latency(us) 00:15:28.639 [2024-11-19T10:15:48.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.639 [2024-11-19T10:15:48.185Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.639 Nvme0n1 : 10.01 6985.34 27.29 0.00 0.00 18317.93 3902.37 96278.34 00:15:28.639 [2024-11-19T10:15:48.185Z] =================================================================================================================== 00:15:28.639 [2024-11-19T10:15:48.185Z] Total : 6985.34 27.29 0.00 0.00 18317.93 3902.37 96278.34 00:15:28.639 0 00:15:28.639 10:15:47 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83902 00:15:28.639 10:15:47 -- common/autotest_common.sh@936 -- # '[' -z 83902 ']' 00:15:28.639 10:15:47 -- common/autotest_common.sh@940 -- # kill -0 83902 00:15:28.639 10:15:47 -- common/autotest_common.sh@941 -- # uname 00:15:28.639 10:15:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.639 10:15:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83902 00:15:28.639 10:15:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:28.639 killing process with pid 83902 00:15:28.639 10:15:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:28.639 10:15:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83902' 00:15:28.639 10:15:47 -- common/autotest_common.sh@955 -- # kill 83902 00:15:28.639 Received shutdown signal, test time was about 10.000000 seconds 00:15:28.639 00:15:28.639 Latency(us) 00:15:28.639 [2024-11-19T10:15:48.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.639 [2024-11-19T10:15:48.185Z] =================================================================================================================== 00:15:28.639 [2024-11-19T10:15:48.185Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:15:28.639 10:15:47 -- common/autotest_common.sh@960 -- # wait 83902 00:15:28.639 10:15:48 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:29.205 10:15:48 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:29.205 10:15:48 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83301 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@74 -- # wait 83301 00:15:29.463 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83301 Killed "${NVMF_APP[@]}" "$@" 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:29.463 10:15:48 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:29.463 10:15:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:29.463 10:15:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.463 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.463 10:15:48 -- nvmf/common.sh@469 -- # nvmfpid=84121 00:15:29.463 10:15:48 -- nvmf/common.sh@470 -- # waitforlisten 84121 00:15:29.463 10:15:48 -- common/autotest_common.sh@829 -- # '[' -z 84121 ']' 00:15:29.463 10:15:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.463 10:15:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:29.463 10:15:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.463 10:15:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.463 10:15:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.463 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.463 [2024-11-19 10:15:48.934333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:29.463 [2024-11-19 10:15:48.934841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.722 [2024-11-19 10:15:49.068772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.722 [2024-11-19 10:15:49.104034] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:29.722 [2024-11-19 10:15:49.104545] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.722 [2024-11-19 10:15:49.104700] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.722 [2024-11-19 10:15:49.104876] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:29.722 [2024-11-19 10:15:49.105020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.655 10:15:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.655 10:15:50 -- common/autotest_common.sh@862 -- # return 0 00:15:30.655 10:15:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:30.655 10:15:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.655 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:15:30.655 10:15:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.655 10:15:50 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:31.221 [2024-11-19 10:15:50.468731] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:31.221 [2024-11-19 10:15:50.469166] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:31.221 [2024-11-19 10:15:50.469434] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:31.221 10:15:50 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:31.221 10:15:50 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:31.221 10:15:50 -- common/autotest_common.sh@897 -- # local bdev_name=cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:31.221 10:15:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.221 10:15:50 -- common/autotest_common.sh@899 -- # local i 00:15:31.221 10:15:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.221 10:15:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.221 10:15:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:31.479 10:15:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdf180b6-d9c7-4867-a78b-d70fb7bbd36e -t 2000 00:15:31.738 [ 00:15:31.738 { 00:15:31.738 "aliases": [ 00:15:31.738 "lvs/lvol" 00:15:31.738 ], 00:15:31.738 "assigned_rate_limits": { 00:15:31.738 "r_mbytes_per_sec": 0, 00:15:31.738 "rw_ios_per_sec": 0, 00:15:31.738 "rw_mbytes_per_sec": 0, 00:15:31.738 "w_mbytes_per_sec": 0 00:15:31.738 }, 00:15:31.738 "block_size": 4096, 00:15:31.738 "claimed": false, 00:15:31.738 "driver_specific": { 00:15:31.738 "lvol": { 00:15:31.738 "base_bdev": "aio_bdev", 00:15:31.738 "clone": false, 00:15:31.738 "esnap_clone": false, 00:15:31.738 "lvol_store_uuid": "5e898e4e-091c-42a4-a9b6-b3dd625053eb", 00:15:31.738 "snapshot": false, 00:15:31.738 "thin_provision": false 00:15:31.738 } 00:15:31.738 }, 00:15:31.738 "name": "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e", 00:15:31.738 "num_blocks": 38912, 00:15:31.738 "product_name": "Logical Volume", 00:15:31.738 "supported_io_types": { 00:15:31.738 "abort": false, 00:15:31.738 "compare": false, 00:15:31.738 "compare_and_write": false, 00:15:31.738 "flush": false, 00:15:31.738 "nvme_admin": false, 00:15:31.738 "nvme_io": false, 00:15:31.738 "read": true, 00:15:31.738 "reset": true, 00:15:31.738 "unmap": true, 00:15:31.738 "write": true, 00:15:31.738 "write_zeroes": true 00:15:31.738 }, 00:15:31.738 "uuid": "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e", 00:15:31.738 "zoned": false 00:15:31.738 } 00:15:31.738 ] 00:15:31.738 10:15:51 -- common/autotest_common.sh@905 -- # return 0 00:15:31.738 10:15:51 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:31.738 10:15:51 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:32.304 10:15:51 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:32.304 10:15:51 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:32.304 10:15:51 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:32.562 10:15:52 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:32.562 10:15:52 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:33.138 [2024-11-19 10:15:52.374961] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:33.138 10:15:52 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:33.138 10:15:52 -- common/autotest_common.sh@650 -- # local es=0 00:15:33.138 10:15:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:33.138 10:15:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.138 10:15:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.138 10:15:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.138 10:15:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.138 10:15:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.138 10:15:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:33.138 10:15:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.138 10:15:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:33.138 10:15:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:33.431 2024/11/19 10:15:52 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5e898e4e-091c-42a4-a9b6-b3dd625053eb], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:15:33.431 request: 00:15:33.431 { 00:15:33.431 "method": "bdev_lvol_get_lvstores", 00:15:33.431 "params": { 00:15:33.431 "uuid": "5e898e4e-091c-42a4-a9b6-b3dd625053eb" 00:15:33.431 } 00:15:33.431 } 00:15:33.431 Got JSON-RPC error response 00:15:33.431 GoRPCClient: error on JSON-RPC call 00:15:33.431 10:15:52 -- common/autotest_common.sh@653 -- # es=1 00:15:33.431 10:15:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.431 10:15:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.431 10:15:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.431 10:15:52 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:33.689 aio_bdev 00:15:33.689 10:15:53 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:33.689 10:15:53 -- common/autotest_common.sh@897 -- # local bdev_name=cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:33.689 10:15:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.689 
10:15:53 -- common/autotest_common.sh@899 -- # local i 00:15:33.689 10:15:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.689 10:15:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.689 10:15:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:33.947 10:15:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cdf180b6-d9c7-4867-a78b-d70fb7bbd36e -t 2000 00:15:34.514 [ 00:15:34.514 { 00:15:34.514 "aliases": [ 00:15:34.514 "lvs/lvol" 00:15:34.514 ], 00:15:34.514 "assigned_rate_limits": { 00:15:34.514 "r_mbytes_per_sec": 0, 00:15:34.514 "rw_ios_per_sec": 0, 00:15:34.514 "rw_mbytes_per_sec": 0, 00:15:34.514 "w_mbytes_per_sec": 0 00:15:34.514 }, 00:15:34.514 "block_size": 4096, 00:15:34.514 "claimed": false, 00:15:34.514 "driver_specific": { 00:15:34.514 "lvol": { 00:15:34.514 "base_bdev": "aio_bdev", 00:15:34.514 "clone": false, 00:15:34.514 "esnap_clone": false, 00:15:34.514 "lvol_store_uuid": "5e898e4e-091c-42a4-a9b6-b3dd625053eb", 00:15:34.514 "snapshot": false, 00:15:34.514 "thin_provision": false 00:15:34.514 } 00:15:34.514 }, 00:15:34.514 "name": "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e", 00:15:34.514 "num_blocks": 38912, 00:15:34.514 "product_name": "Logical Volume", 00:15:34.514 "supported_io_types": { 00:15:34.514 "abort": false, 00:15:34.514 "compare": false, 00:15:34.514 "compare_and_write": false, 00:15:34.514 "flush": false, 00:15:34.514 "nvme_admin": false, 00:15:34.514 "nvme_io": false, 00:15:34.514 "read": true, 00:15:34.514 "reset": true, 00:15:34.514 "unmap": true, 00:15:34.514 "write": true, 00:15:34.514 "write_zeroes": true 00:15:34.514 }, 00:15:34.514 "uuid": "cdf180b6-d9c7-4867-a78b-d70fb7bbd36e", 00:15:34.514 "zoned": false 00:15:34.514 } 00:15:34.514 ] 00:15:34.514 10:15:53 -- common/autotest_common.sh@905 -- # return 0 00:15:34.514 10:15:53 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:34.514 10:15:53 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:34.772 10:15:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:34.772 10:15:54 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:34.772 10:15:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:35.030 10:15:54 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:35.030 10:15:54 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cdf180b6-d9c7-4867-a78b-d70fb7bbd36e 00:15:35.595 10:15:54 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e898e4e-091c-42a4-a9b6-b3dd625053eb 00:15:35.853 10:15:55 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:36.420 10:15:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:36.987 ************************************ 00:15:36.987 END TEST lvs_grow_dirty 00:15:36.987 ************************************ 00:15:36.987 00:15:36.987 real 0m23.566s 00:15:36.987 user 0m46.643s 00:15:36.987 sys 0m7.990s 00:15:36.987 10:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:36.987 10:15:56 -- common/autotest_common.sh@10 -- # set +x 00:15:36.987 10:15:56 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:36.987 10:15:56 -- common/autotest_common.sh@806 -- # type=--id 00:15:36.987 10:15:56 -- common/autotest_common.sh@807 -- # id=0 00:15:36.987 10:15:56 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:36.987 10:15:56 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:36.987 10:15:56 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:36.987 10:15:56 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:36.987 10:15:56 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:36.987 10:15:56 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:36.987 nvmf_trace.0 00:15:36.987 10:15:56 -- common/autotest_common.sh@821 -- # return 0 00:15:36.987 10:15:56 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:36.987 10:15:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.987 10:15:56 -- nvmf/common.sh@116 -- # sync 00:15:37.245 10:15:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:37.245 10:15:56 -- nvmf/common.sh@119 -- # set +e 00:15:37.245 10:15:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:37.245 10:15:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:37.245 rmmod nvme_tcp 00:15:37.245 rmmod nvme_fabrics 00:15:37.245 rmmod nvme_keyring 00:15:37.245 10:15:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:37.245 10:15:56 -- nvmf/common.sh@123 -- # set -e 00:15:37.245 10:15:56 -- nvmf/common.sh@124 -- # return 0 00:15:37.245 10:15:56 -- nvmf/common.sh@477 -- # '[' -n 84121 ']' 00:15:37.245 10:15:56 -- nvmf/common.sh@478 -- # killprocess 84121 00:15:37.245 10:15:56 -- common/autotest_common.sh@936 -- # '[' -z 84121 ']' 00:15:37.245 10:15:56 -- common/autotest_common.sh@940 -- # kill -0 84121 00:15:37.245 10:15:56 -- common/autotest_common.sh@941 -- # uname 00:15:37.245 10:15:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:37.245 10:15:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84121 00:15:37.245 10:15:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:37.245 10:15:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:37.245 killing process with pid 84121 00:15:37.245 10:15:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84121' 00:15:37.245 10:15:56 -- common/autotest_common.sh@955 -- # kill 84121 00:15:37.245 10:15:56 -- common/autotest_common.sh@960 -- # wait 84121 00:15:37.504 10:15:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:37.504 10:15:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:37.504 10:15:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:37.504 10:15:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.504 10:15:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:37.504 10:15:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.504 10:15:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.504 10:15:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.504 10:15:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:37.504 00:15:37.504 real 0m44.582s 00:15:37.504 user 1m13.071s 00:15:37.504 sys 0m10.800s 00:15:37.504 10:15:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:37.504 10:15:56 -- common/autotest_common.sh@10 -- # set +x 00:15:37.504 
************************************ 00:15:37.504 END TEST nvmf_lvs_grow 00:15:37.504 ************************************ 00:15:37.504 10:15:56 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:37.504 10:15:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:37.504 10:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:37.504 10:15:56 -- common/autotest_common.sh@10 -- # set +x 00:15:37.504 ************************************ 00:15:37.504 START TEST nvmf_bdev_io_wait 00:15:37.504 ************************************ 00:15:37.504 10:15:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:37.504 * Looking for test storage... 00:15:37.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.504 10:15:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:37.504 10:15:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:37.504 10:15:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:37.762 10:15:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:37.762 10:15:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:37.762 10:15:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:37.762 10:15:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:37.762 10:15:57 -- scripts/common.sh@335 -- # IFS=.-: 00:15:37.762 10:15:57 -- scripts/common.sh@335 -- # read -ra ver1 00:15:37.762 10:15:57 -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.762 10:15:57 -- scripts/common.sh@336 -- # read -ra ver2 00:15:37.762 10:15:57 -- scripts/common.sh@337 -- # local 'op=<' 00:15:37.762 10:15:57 -- scripts/common.sh@339 -- # ver1_l=2 00:15:37.762 10:15:57 -- scripts/common.sh@340 -- # ver2_l=1 00:15:37.762 10:15:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:37.762 10:15:57 -- scripts/common.sh@343 -- # case "$op" in 00:15:37.762 10:15:57 -- scripts/common.sh@344 -- # : 1 00:15:37.762 10:15:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:37.762 10:15:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.762 10:15:57 -- scripts/common.sh@364 -- # decimal 1 00:15:37.762 10:15:57 -- scripts/common.sh@352 -- # local d=1 00:15:37.762 10:15:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.762 10:15:57 -- scripts/common.sh@354 -- # echo 1 00:15:37.762 10:15:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:37.762 10:15:57 -- scripts/common.sh@365 -- # decimal 2 00:15:37.762 10:15:57 -- scripts/common.sh@352 -- # local d=2 00:15:37.762 10:15:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.762 10:15:57 -- scripts/common.sh@354 -- # echo 2 00:15:37.762 10:15:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:37.762 10:15:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:37.762 10:15:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:37.762 10:15:57 -- scripts/common.sh@367 -- # return 0 00:15:37.762 10:15:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.762 10:15:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.762 --rc genhtml_branch_coverage=1 00:15:37.762 --rc genhtml_function_coverage=1 00:15:37.762 --rc genhtml_legend=1 00:15:37.762 --rc geninfo_all_blocks=1 00:15:37.762 --rc geninfo_unexecuted_blocks=1 00:15:37.762 00:15:37.762 ' 00:15:37.762 10:15:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.762 --rc genhtml_branch_coverage=1 00:15:37.762 --rc genhtml_function_coverage=1 00:15:37.762 --rc genhtml_legend=1 00:15:37.762 --rc geninfo_all_blocks=1 00:15:37.762 --rc geninfo_unexecuted_blocks=1 00:15:37.762 00:15:37.762 ' 00:15:37.762 10:15:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.762 --rc genhtml_branch_coverage=1 00:15:37.762 --rc genhtml_function_coverage=1 00:15:37.762 --rc genhtml_legend=1 00:15:37.762 --rc geninfo_all_blocks=1 00:15:37.762 --rc geninfo_unexecuted_blocks=1 00:15:37.762 00:15:37.762 ' 00:15:37.762 10:15:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:37.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.762 --rc genhtml_branch_coverage=1 00:15:37.762 --rc genhtml_function_coverage=1 00:15:37.762 --rc genhtml_legend=1 00:15:37.762 --rc geninfo_all_blocks=1 00:15:37.762 --rc geninfo_unexecuted_blocks=1 00:15:37.762 00:15:37.762 ' 00:15:37.762 10:15:57 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.762 10:15:57 -- nvmf/common.sh@7 -- # uname -s 00:15:37.762 10:15:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.762 10:15:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.762 10:15:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.762 10:15:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.762 10:15:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.762 10:15:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.762 10:15:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.762 10:15:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.762 10:15:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.762 10:15:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.762 10:15:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
00:15:37.762 10:15:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:15:37.762 10:15:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.762 10:15:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.762 10:15:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.762 10:15:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.762 10:15:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.762 10:15:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.762 10:15:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.762 10:15:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.762 10:15:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.762 10:15:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.762 10:15:57 -- paths/export.sh@5 -- # export PATH 00:15:37.763 10:15:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.763 10:15:57 -- nvmf/common.sh@46 -- # : 0 00:15:37.763 10:15:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.763 10:15:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.763 10:15:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.763 10:15:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.763 10:15:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.763 10:15:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:37.763 10:15:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.763 10:15:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.763 10:15:57 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.763 10:15:57 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.763 10:15:57 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:37.763 10:15:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.763 10:15:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.763 10:15:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.763 10:15:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.763 10:15:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.763 10:15:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.763 10:15:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.763 10:15:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.763 10:15:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:37.763 10:15:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:37.763 10:15:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:37.763 10:15:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:37.763 10:15:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:37.763 10:15:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:37.763 10:15:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.763 10:15:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.763 10:15:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:37.763 10:15:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:37.763 10:15:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.763 10:15:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.763 10:15:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.763 10:15:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.763 10:15:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.763 10:15:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.763 10:15:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.763 10:15:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.763 10:15:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:37.763 10:15:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:37.763 Cannot find device "nvmf_tgt_br" 00:15:37.763 10:15:57 -- nvmf/common.sh@154 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.763 Cannot find device "nvmf_tgt_br2" 00:15:37.763 10:15:57 -- nvmf/common.sh@155 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:37.763 10:15:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:37.763 Cannot find device "nvmf_tgt_br" 00:15:37.763 10:15:57 -- nvmf/common.sh@157 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:37.763 Cannot find device "nvmf_tgt_br2" 00:15:37.763 10:15:57 -- nvmf/common.sh@158 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:37.763 10:15:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:37.763 10:15:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.763 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.763 10:15:57 -- nvmf/common.sh@161 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.763 10:15:57 -- nvmf/common.sh@162 -- # true 00:15:37.763 10:15:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.763 10:15:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.028 10:15:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.028 10:15:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.028 10:15:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.028 10:15:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.028 10:15:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.028 10:15:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:38.028 10:15:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:38.028 10:15:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:38.028 10:15:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:38.028 10:15:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:38.028 10:15:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:38.028 10:15:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.028 10:15:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.028 10:15:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.028 10:15:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:38.028 10:15:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:38.028 10:15:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.028 10:15:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.028 10:15:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.028 10:15:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.028 10:15:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.028 10:15:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:38.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:38.028 00:15:38.028 --- 10.0.0.2 ping statistics --- 00:15:38.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.028 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:38.028 10:15:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:38.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:38.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:38.028 00:15:38.028 --- 10.0.0.3 ping statistics --- 00:15:38.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.028 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:38.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:38.028 00:15:38.028 --- 10.0.0.1 ping statistics --- 00:15:38.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.028 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:38.028 10:15:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.028 10:15:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.028 10:15:57 -- nvmf/common.sh@421 -- # return 0 00:15:38.028 10:15:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:38.028 10:15:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.028 10:15:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:38.028 10:15:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:38.028 10:15:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.028 10:15:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:38.028 10:15:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:38.028 10:15:57 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:38.028 10:15:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:38.028 10:15:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.028 10:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.028 10:15:57 -- nvmf/common.sh@469 -- # nvmfpid=84569 00:15:38.028 10:15:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:38.028 10:15:57 -- nvmf/common.sh@470 -- # waitforlisten 84569 00:15:38.028 10:15:57 -- common/autotest_common.sh@829 -- # '[' -z 84569 ']' 00:15:38.028 10:15:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.028 10:15:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.028 10:15:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.028 10:15:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.028 10:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.286 [2024-11-19 10:15:57.606134] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:38.286 [2024-11-19 10:15:57.606278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.286 [2024-11-19 10:15:57.755520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.286 [2024-11-19 10:15:57.802632] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.286 [2024-11-19 10:15:57.803001] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.286 [2024-11-19 10:15:57.803030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.286 [2024-11-19 10:15:57.803047] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:38.286 [2024-11-19 10:15:57.803166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.286 [2024-11-19 10:15:57.803258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.286 [2024-11-19 10:15:57.804316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.286 [2024-11-19 10:15:57.804342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.603 10:15:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.603 10:15:57 -- common/autotest_common.sh@862 -- # return 0 00:15:38.603 10:15:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:38.603 10:15:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.603 10:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 10:15:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.603 10:15:57 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:38.603 10:15:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 10:15:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:57 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:38.603 10:15:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.603 10:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 [2024-11-19 10:15:58.034683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.603 10:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 Malloc0 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.603 10:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:38.603 10:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.603 10:15:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.603 10:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:38.603 [2024-11-19 10:15:58.084542] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.603 10:15:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84615 00:15:38.603 10:15:58 
-- target/bdev_io_wait.sh@30 -- # READ_PID=84617 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:38.603 10:15:58 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:38.603 10:15:58 -- nvmf/common.sh@520 -- # config=() 00:15:38.603 10:15:58 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.603 10:15:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.603 10:15:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.603 { 00:15:38.603 "params": { 00:15:38.603 "name": "Nvme$subsystem", 00:15:38.603 "trtype": "$TEST_TRANSPORT", 00:15:38.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.603 "adrfam": "ipv4", 00:15:38.603 "trsvcid": "$NVMF_PORT", 00:15:38.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.603 "hdgst": ${hdgst:-false}, 00:15:38.603 "ddgst": ${ddgst:-false} 00:15:38.603 }, 00:15:38.603 "method": "bdev_nvme_attach_controller" 00:15:38.603 } 00:15:38.603 EOF 00:15:38.604 )") 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84619 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # config=() 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.604 10:15:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.604 { 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme$subsystem", 00:15:38.604 "trtype": "$TEST_TRANSPORT", 00:15:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "$NVMF_PORT", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.604 "hdgst": ${hdgst:-false}, 00:15:38.604 "ddgst": ${ddgst:-false} 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 } 00:15:38.604 EOF 00:15:38.604 )") 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # cat 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # cat 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # config=() 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.604 10:15:58 -- nvmf/common.sh@544 -- # jq . 
00:15:38.604 10:15:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.604 { 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme$subsystem", 00:15:38.604 "trtype": "$TEST_TRANSPORT", 00:15:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "$NVMF_PORT", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.604 "hdgst": ${hdgst:-false}, 00:15:38.604 "ddgst": ${ddgst:-false} 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 } 00:15:38.604 EOF 00:15:38.604 )") 00:15:38.604 10:15:58 -- nvmf/common.sh@544 -- # jq . 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84627 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@35 -- # sync 00:15:38.604 10:15:58 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.604 10:15:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme1", 00:15:38.604 "trtype": "tcp", 00:15:38.604 "traddr": "10.0.0.2", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "4420", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.604 "hdgst": false, 00:15:38.604 "ddgst": false 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 }' 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # cat 00:15:38.604 10:15:58 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # config=() 00:15:38.604 10:15:58 -- nvmf/common.sh@520 -- # local subsystem config 00:15:38.604 10:15:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:38.604 { 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme$subsystem", 00:15:38.604 "trtype": "$TEST_TRANSPORT", 00:15:38.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "$NVMF_PORT", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:38.604 "hdgst": ${hdgst:-false}, 00:15:38.604 "ddgst": ${ddgst:-false} 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 } 00:15:38.604 EOF 00:15:38.604 )") 00:15:38.604 10:15:58 -- nvmf/common.sh@542 -- # cat 00:15:38.604 10:15:58 -- nvmf/common.sh@544 -- # jq . 00:15:38.604 10:15:58 -- nvmf/common.sh@544 -- # jq . 
00:15:38.604 10:15:58 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.604 10:15:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme1", 00:15:38.604 "trtype": "tcp", 00:15:38.604 "traddr": "10.0.0.2", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "4420", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.604 "hdgst": false, 00:15:38.604 "ddgst": false 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 }' 00:15:38.604 10:15:58 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.604 10:15:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme1", 00:15:38.604 "trtype": "tcp", 00:15:38.604 "traddr": "10.0.0.2", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "4420", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.604 "hdgst": false, 00:15:38.604 "ddgst": false 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 }' 00:15:38.604 10:15:58 -- nvmf/common.sh@545 -- # IFS=, 00:15:38.604 10:15:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:38.604 "params": { 00:15:38.604 "name": "Nvme1", 00:15:38.604 "trtype": "tcp", 00:15:38.604 "traddr": "10.0.0.2", 00:15:38.604 "adrfam": "ipv4", 00:15:38.604 "trsvcid": "4420", 00:15:38.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:38.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:38.604 "hdgst": false, 00:15:38.604 "ddgst": false 00:15:38.604 }, 00:15:38.604 "method": "bdev_nvme_attach_controller" 00:15:38.604 }' 00:15:38.862 [2024-11-19 10:15:58.157032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:38.862 [2024-11-19 10:15:58.157151] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:38.863 [2024-11-19 10:15:58.180807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:38.863 [2024-11-19 10:15:58.180933] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:38.863 10:15:58 -- target/bdev_io_wait.sh@37 -- # wait 84615 00:15:38.863 [2024-11-19 10:15:58.189463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:38.863 [2024-11-19 10:15:58.189583] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:38.863 [2024-11-19 10:15:58.190833] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:38.863 [2024-11-19 10:15:58.191151] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:38.863 [2024-11-19 10:15:58.346770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.863 [2024-11-19 10:15:58.372184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:39.121 [2024-11-19 10:15:58.429893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.121 [2024-11-19 10:15:58.457674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:39.121 [2024-11-19 10:15:58.479619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.121 [2024-11-19 10:15:58.487661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.121 Running I/O for 1 seconds... 00:15:39.121 [2024-11-19 10:15:58.501778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:39.121 [2024-11-19 10:15:58.518749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.121 Running I/O for 1 seconds... 00:15:39.121 Running I/O for 1 seconds... 00:15:39.121 Running I/O for 1 seconds... 00:15:40.057 00:15:40.057 Latency(us) 00:15:40.057 [2024-11-19T10:15:59.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.057 [2024-11-19T10:15:59.603Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:40.057 Nvme1n1 : 1.00 151424.25 591.50 0.00 0.00 842.02 353.75 1839.48 00:15:40.057 [2024-11-19T10:15:59.603Z] =================================================================================================================== 00:15:40.057 [2024-11-19T10:15:59.603Z] Total : 151424.25 591.50 0.00 0.00 842.02 353.75 1839.48 00:15:40.316 00:15:40.316 Latency(us) 00:15:40.316 [2024-11-19T10:15:59.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.317 [2024-11-19T10:15:59.863Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:40.317 Nvme1n1 : 1.02 4464.24 17.44 0.00 0.00 28383.42 7536.64 49807.36 00:15:40.317 [2024-11-19T10:15:59.863Z] =================================================================================================================== 00:15:40.317 [2024-11-19T10:15:59.863Z] Total : 4464.24 17.44 0.00 0.00 28383.42 7536.64 49807.36 00:15:40.317 00:15:40.317 Latency(us) 00:15:40.317 [2024-11-19T10:15:59.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.317 [2024-11-19T10:15:59.863Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:40.317 Nvme1n1 : 1.01 7556.71 29.52 0.00 0.00 16859.16 6106.76 29669.93 00:15:40.317 [2024-11-19T10:15:59.863Z] =================================================================================================================== 00:15:40.317 [2024-11-19T10:15:59.863Z] Total : 7556.71 29.52 0.00 0.00 16859.16 6106.76 29669.93 00:15:40.317 00:15:40.317 Latency(us) 00:15:40.317 [2024-11-19T10:15:59.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.317 [2024-11-19T10:15:59.863Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:40.317 Nvme1n1 : 1.01 4237.44 16.55 0.00 0.00 30073.09 7387.69 53858.68 00:15:40.317 [2024-11-19T10:15:59.863Z] 
=================================================================================================================== 00:15:40.317 [2024-11-19T10:15:59.863Z] Total : 4237.44 16.55 0.00 0.00 30073.09 7387.69 53858.68 00:15:40.317 10:15:59 -- target/bdev_io_wait.sh@38 -- # wait 84617 00:15:40.575 10:15:59 -- target/bdev_io_wait.sh@39 -- # wait 84619 00:15:40.575 10:15:59 -- target/bdev_io_wait.sh@40 -- # wait 84627 00:15:40.575 10:15:59 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.575 10:15:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.575 10:15:59 -- common/autotest_common.sh@10 -- # set +x 00:15:40.575 10:15:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.575 10:15:59 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:40.575 10:15:59 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:40.575 10:15:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.575 10:15:59 -- nvmf/common.sh@116 -- # sync 00:15:40.575 10:15:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.575 10:15:59 -- nvmf/common.sh@119 -- # set +e 00:15:40.575 10:15:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.575 10:15:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.575 rmmod nvme_tcp 00:15:40.575 rmmod nvme_fabrics 00:15:40.575 rmmod nvme_keyring 00:15:40.575 10:15:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.575 10:15:59 -- nvmf/common.sh@123 -- # set -e 00:15:40.575 10:15:59 -- nvmf/common.sh@124 -- # return 0 00:15:40.575 10:15:59 -- nvmf/common.sh@477 -- # '[' -n 84569 ']' 00:15:40.575 10:15:59 -- nvmf/common.sh@478 -- # killprocess 84569 00:15:40.575 10:15:59 -- common/autotest_common.sh@936 -- # '[' -z 84569 ']' 00:15:40.575 10:15:59 -- common/autotest_common.sh@940 -- # kill -0 84569 00:15:40.575 10:15:59 -- common/autotest_common.sh@941 -- # uname 00:15:40.575 10:15:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:40.575 10:15:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84569 00:15:40.575 killing process with pid 84569 00:15:40.575 10:15:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:40.576 10:15:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:40.576 10:16:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84569' 00:15:40.576 10:16:00 -- common/autotest_common.sh@955 -- # kill 84569 00:15:40.576 10:16:00 -- common/autotest_common.sh@960 -- # wait 84569 00:15:40.834 10:16:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.834 10:16:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.834 10:16:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.834 10:16:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.834 10:16:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.834 10:16:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.834 10:16:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.834 10:16:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.834 10:16:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:40.834 00:15:40.834 real 0m3.235s 00:15:40.834 user 0m13.916s 00:15:40.834 sys 0m1.707s 00:15:40.834 10:16:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:40.834 ************************************ 00:15:40.834 END TEST nvmf_bdev_io_wait 00:15:40.834 ************************************ 00:15:40.834 
10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:40.834 10:16:00 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:40.834 10:16:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:40.834 10:16:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.834 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:40.834 ************************************ 00:15:40.834 START TEST nvmf_queue_depth 00:15:40.834 ************************************ 00:15:40.834 10:16:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:40.834 * Looking for test storage... 00:15:40.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:40.834 10:16:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:40.834 10:16:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:40.834 10:16:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:41.093 10:16:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:41.093 10:16:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:41.093 10:16:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:41.093 10:16:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:41.093 10:16:00 -- scripts/common.sh@335 -- # IFS=.-: 00:15:41.093 10:16:00 -- scripts/common.sh@335 -- # read -ra ver1 00:15:41.093 10:16:00 -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.093 10:16:00 -- scripts/common.sh@336 -- # read -ra ver2 00:15:41.093 10:16:00 -- scripts/common.sh@337 -- # local 'op=<' 00:15:41.093 10:16:00 -- scripts/common.sh@339 -- # ver1_l=2 00:15:41.093 10:16:00 -- scripts/common.sh@340 -- # ver2_l=1 00:15:41.093 10:16:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:41.093 10:16:00 -- scripts/common.sh@343 -- # case "$op" in 00:15:41.093 10:16:00 -- scripts/common.sh@344 -- # : 1 00:15:41.093 10:16:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:41.093 10:16:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.093 10:16:00 -- scripts/common.sh@364 -- # decimal 1 00:15:41.093 10:16:00 -- scripts/common.sh@352 -- # local d=1 00:15:41.093 10:16:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.093 10:16:00 -- scripts/common.sh@354 -- # echo 1 00:15:41.093 10:16:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:41.093 10:16:00 -- scripts/common.sh@365 -- # decimal 2 00:15:41.093 10:16:00 -- scripts/common.sh@352 -- # local d=2 00:15:41.093 10:16:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.093 10:16:00 -- scripts/common.sh@354 -- # echo 2 00:15:41.093 10:16:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:41.093 10:16:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:41.093 10:16:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:41.093 10:16:00 -- scripts/common.sh@367 -- # return 0 00:15:41.093 10:16:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.093 10:16:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.093 --rc genhtml_branch_coverage=1 00:15:41.093 --rc genhtml_function_coverage=1 00:15:41.093 --rc genhtml_legend=1 00:15:41.093 --rc geninfo_all_blocks=1 00:15:41.093 --rc geninfo_unexecuted_blocks=1 00:15:41.093 00:15:41.093 ' 00:15:41.093 10:16:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.093 --rc genhtml_branch_coverage=1 00:15:41.093 --rc genhtml_function_coverage=1 00:15:41.093 --rc genhtml_legend=1 00:15:41.093 --rc geninfo_all_blocks=1 00:15:41.093 --rc geninfo_unexecuted_blocks=1 00:15:41.093 00:15:41.093 ' 00:15:41.093 10:16:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.093 --rc genhtml_branch_coverage=1 00:15:41.093 --rc genhtml_function_coverage=1 00:15:41.093 --rc genhtml_legend=1 00:15:41.093 --rc geninfo_all_blocks=1 00:15:41.093 --rc geninfo_unexecuted_blocks=1 00:15:41.093 00:15:41.093 ' 00:15:41.093 10:16:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:41.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.093 --rc genhtml_branch_coverage=1 00:15:41.093 --rc genhtml_function_coverage=1 00:15:41.093 --rc genhtml_legend=1 00:15:41.093 --rc geninfo_all_blocks=1 00:15:41.093 --rc geninfo_unexecuted_blocks=1 00:15:41.093 00:15:41.093 ' 00:15:41.093 10:16:00 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.093 10:16:00 -- nvmf/common.sh@7 -- # uname -s 00:15:41.093 10:16:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.093 10:16:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.093 10:16:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.093 10:16:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.093 10:16:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.093 10:16:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.093 10:16:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.093 10:16:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.093 10:16:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.093 10:16:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.093 10:16:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
00:15:41.093 10:16:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:15:41.093 10:16:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.093 10:16:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.093 10:16:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.093 10:16:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.093 10:16:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.093 10:16:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.093 10:16:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.093 10:16:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.093 10:16:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.093 10:16:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.094 10:16:00 -- paths/export.sh@5 -- # export PATH 00:15:41.094 10:16:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.094 10:16:00 -- nvmf/common.sh@46 -- # : 0 00:15:41.094 10:16:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:41.094 10:16:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:41.094 10:16:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:41.094 10:16:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.094 10:16:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.094 10:16:00 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:15:41.094 10:16:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:41.094 10:16:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:41.094 10:16:00 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:41.094 10:16:00 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:41.094 10:16:00 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.094 10:16:00 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:41.094 10:16:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:41.094 10:16:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.094 10:16:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:41.094 10:16:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:41.094 10:16:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:41.094 10:16:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.094 10:16:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.094 10:16:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.094 10:16:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:41.094 10:16:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:41.094 10:16:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:41.094 10:16:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:41.094 10:16:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:41.094 10:16:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:41.094 10:16:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.094 10:16:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.094 10:16:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:41.094 10:16:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:41.094 10:16:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:41.094 10:16:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.094 10:16:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.094 10:16:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.094 10:16:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.094 10:16:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.094 10:16:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.094 10:16:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.094 10:16:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:41.094 10:16:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:41.094 Cannot find device "nvmf_tgt_br" 00:15:41.094 10:16:00 -- nvmf/common.sh@154 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.094 Cannot find device "nvmf_tgt_br2" 00:15:41.094 10:16:00 -- nvmf/common.sh@155 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:41.094 10:16:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:41.094 Cannot find device "nvmf_tgt_br" 00:15:41.094 10:16:00 -- nvmf/common.sh@157 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:41.094 Cannot find device "nvmf_tgt_br2" 00:15:41.094 10:16:00 -- nvmf/common.sh@158 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:41.094 10:16:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:41.094 10:16:00 -- 
nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.094 10:16:00 -- nvmf/common.sh@161 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.094 10:16:00 -- nvmf/common.sh@162 -- # true 00:15:41.094 10:16:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.094 10:16:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.094 10:16:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.094 10:16:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.094 10:16:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.094 10:16:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.094 10:16:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.094 10:16:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:41.094 10:16:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:41.353 10:16:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:41.353 10:16:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:41.353 10:16:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:41.353 10:16:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:41.353 10:16:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.353 10:16:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.353 10:16:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.353 10:16:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:41.353 10:16:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:41.353 10:16:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.353 10:16:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.353 10:16:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.353 10:16:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.353 10:16:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.353 10:16:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:41.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:41.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:15:41.353 00:15:41.353 --- 10.0.0.2 ping statistics --- 00:15:41.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.353 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:15:41.353 10:16:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:41.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:41.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:15:41.353 00:15:41.353 --- 10.0.0.3 ping statistics --- 00:15:41.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.353 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:15:41.353 10:16:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:41.353 00:15:41.353 --- 10.0.0.1 ping statistics --- 00:15:41.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.353 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:41.353 10:16:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.353 10:16:00 -- nvmf/common.sh@421 -- # return 0 00:15:41.353 10:16:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:41.353 10:16:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.353 10:16:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:41.353 10:16:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:41.353 10:16:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.353 10:16:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:41.353 10:16:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:41.353 10:16:00 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:41.353 10:16:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:41.353 10:16:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.353 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.353 10:16:00 -- nvmf/common.sh@469 -- # nvmfpid=84832 00:15:41.353 10:16:00 -- nvmf/common.sh@470 -- # waitforlisten 84832 00:15:41.353 10:16:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.353 10:16:00 -- common/autotest_common.sh@829 -- # '[' -z 84832 ']' 00:15:41.353 10:16:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.353 10:16:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.353 10:16:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.353 10:16:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.353 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:41.353 [2024-11-19 10:16:00.847722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:41.353 [2024-11-19 10:16:00.847865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.612 [2024-11-19 10:16:00.991476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.612 [2024-11-19 10:16:01.025802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:41.612 [2024-11-19 10:16:01.026169] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.612 [2024-11-19 10:16:01.026191] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:41.612 [2024-11-19 10:16:01.026200] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.612 [2024-11-19 10:16:01.026234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.546 10:16:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.546 10:16:01 -- common/autotest_common.sh@862 -- # return 0 00:15:42.546 10:16:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:42.546 10:16:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.546 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.546 10:16:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.546 10:16:01 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.546 10:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.546 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.546 [2024-11-19 10:16:01.889507] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.546 10:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.546 10:16:01 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:42.546 10:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.546 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.546 Malloc0 00:15:42.546 10:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.546 10:16:01 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:42.546 10:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.546 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.546 10:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.546 10:16:01 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:42.546 10:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.547 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.547 10:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.547 10:16:01 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.547 10:16:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.547 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.547 [2024-11-19 10:16:01.939645] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:42.547 10:16:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.547 10:16:01 -- target/queue_depth.sh@30 -- # bdevperf_pid=84889 00:15:42.547 10:16:01 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:42.547 10:16:01 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:42.547 10:16:01 -- target/queue_depth.sh@33 -- # waitforlisten 84889 /var/tmp/bdevperf.sock 00:15:42.547 10:16:01 -- common/autotest_common.sh@829 -- # '[' -z 84889 ']' 00:15:42.547 10:16:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.547 10:16:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.547 10:16:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.547 10:16:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.547 10:16:01 -- common/autotest_common.sh@10 -- # set +x 00:15:42.547 [2024-11-19 10:16:01.997334] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:42.547 [2024-11-19 10:16:01.997872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84889 ] 00:15:42.806 [2024-11-19 10:16:02.130212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.806 [2024-11-19 10:16:02.164205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.806 10:16:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.806 10:16:02 -- common/autotest_common.sh@862 -- # return 0 00:15:42.806 10:16:02 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:42.806 10:16:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.806 10:16:02 -- common/autotest_common.sh@10 -- # set +x 00:15:42.806 NVMe0n1 00:15:42.806 10:16:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.806 10:16:02 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.075 Running I/O for 10 seconds... 
00:15:53.079 00:15:53.079 Latency(us) 00:15:53.079 [2024-11-19T10:16:12.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.079 [2024-11-19T10:16:12.625Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:53.079 Verification LBA range: start 0x0 length 0x4000 00:15:53.079 NVMe0n1 : 10.06 13406.00 52.37 0.00 0.00 76091.20 14656.23 61961.31 00:15:53.079 [2024-11-19T10:16:12.625Z] =================================================================================================================== 00:15:53.079 [2024-11-19T10:16:12.625Z] Total : 13406.00 52.37 0.00 0.00 76091.20 14656.23 61961.31 00:15:53.079 0 00:15:53.079 10:16:12 -- target/queue_depth.sh@39 -- # killprocess 84889 00:15:53.079 10:16:12 -- common/autotest_common.sh@936 -- # '[' -z 84889 ']' 00:15:53.079 10:16:12 -- common/autotest_common.sh@940 -- # kill -0 84889 00:15:53.079 10:16:12 -- common/autotest_common.sh@941 -- # uname 00:15:53.079 10:16:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.079 10:16:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84889 00:15:53.079 killing process with pid 84889 00:15:53.079 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.079 00:15:53.079 Latency(us) 00:15:53.079 [2024-11-19T10:16:12.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.079 [2024-11-19T10:16:12.625Z] =================================================================================================================== 00:15:53.079 [2024-11-19T10:16:12.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.079 10:16:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:53.079 10:16:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:53.079 10:16:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84889' 00:15:53.079 10:16:12 -- common/autotest_common.sh@955 -- # kill 84889 00:15:53.079 10:16:12 -- common/autotest_common.sh@960 -- # wait 84889 00:15:53.338 10:16:12 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:53.338 10:16:12 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:53.338 10:16:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:53.338 10:16:12 -- nvmf/common.sh@116 -- # sync 00:15:53.338 10:16:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:53.338 10:16:12 -- nvmf/common.sh@119 -- # set +e 00:15:53.338 10:16:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:53.338 10:16:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:53.338 rmmod nvme_tcp 00:15:53.338 rmmod nvme_fabrics 00:15:53.338 rmmod nvme_keyring 00:15:53.338 10:16:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:53.338 10:16:12 -- nvmf/common.sh@123 -- # set -e 00:15:53.338 10:16:12 -- nvmf/common.sh@124 -- # return 0 00:15:53.338 10:16:12 -- nvmf/common.sh@477 -- # '[' -n 84832 ']' 00:15:53.338 10:16:12 -- nvmf/common.sh@478 -- # killprocess 84832 00:15:53.338 10:16:12 -- common/autotest_common.sh@936 -- # '[' -z 84832 ']' 00:15:53.338 10:16:12 -- common/autotest_common.sh@940 -- # kill -0 84832 00:15:53.338 10:16:12 -- common/autotest_common.sh@941 -- # uname 00:15:53.338 10:16:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:53.338 10:16:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84832 00:15:53.338 killing process with pid 84832 00:15:53.338 10:16:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:53.338 10:16:12 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:53.338 10:16:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84832' 00:15:53.338 10:16:12 -- common/autotest_common.sh@955 -- # kill 84832 00:15:53.338 10:16:12 -- common/autotest_common.sh@960 -- # wait 84832 00:15:53.597 10:16:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:53.597 10:16:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:53.597 10:16:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:53.597 10:16:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:53.597 10:16:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:53.597 10:16:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.597 10:16:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.597 10:16:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.597 10:16:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:53.597 ************************************ 00:15:53.597 END TEST nvmf_queue_depth 00:15:53.597 ************************************ 00:15:53.597 00:15:53.597 real 0m12.822s 00:15:53.597 user 0m21.541s 00:15:53.597 sys 0m2.074s 00:15:53.597 10:16:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:53.597 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:53.597 10:16:13 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.597 10:16:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:53.597 10:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.597 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:53.597 ************************************ 00:15:53.597 START TEST nvmf_multipath 00:15:53.597 ************************************ 00:15:53.597 10:16:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:53.856 * Looking for test storage... 00:15:53.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.856 10:16:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:53.857 10:16:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:53.857 10:16:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:53.857 10:16:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:53.857 10:16:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:53.857 10:16:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:53.857 10:16:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:53.857 10:16:13 -- scripts/common.sh@335 -- # IFS=.-: 00:15:53.857 10:16:13 -- scripts/common.sh@335 -- # read -ra ver1 00:15:53.857 10:16:13 -- scripts/common.sh@336 -- # IFS=.-: 00:15:53.857 10:16:13 -- scripts/common.sh@336 -- # read -ra ver2 00:15:53.857 10:16:13 -- scripts/common.sh@337 -- # local 'op=<' 00:15:53.857 10:16:13 -- scripts/common.sh@339 -- # ver1_l=2 00:15:53.857 10:16:13 -- scripts/common.sh@340 -- # ver2_l=1 00:15:53.857 10:16:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:53.857 10:16:13 -- scripts/common.sh@343 -- # case "$op" in 00:15:53.857 10:16:13 -- scripts/common.sh@344 -- # : 1 00:15:53.857 10:16:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:53.857 10:16:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:53.857 10:16:13 -- scripts/common.sh@364 -- # decimal 1 00:15:53.857 10:16:13 -- scripts/common.sh@352 -- # local d=1 00:15:53.857 10:16:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:53.857 10:16:13 -- scripts/common.sh@354 -- # echo 1 00:15:53.857 10:16:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:53.857 10:16:13 -- scripts/common.sh@365 -- # decimal 2 00:15:53.857 10:16:13 -- scripts/common.sh@352 -- # local d=2 00:15:53.857 10:16:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:53.857 10:16:13 -- scripts/common.sh@354 -- # echo 2 00:15:53.857 10:16:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:53.857 10:16:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:53.857 10:16:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:53.857 10:16:13 -- scripts/common.sh@367 -- # return 0 00:15:53.857 10:16:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:53.857 10:16:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.857 --rc genhtml_branch_coverage=1 00:15:53.857 --rc genhtml_function_coverage=1 00:15:53.857 --rc genhtml_legend=1 00:15:53.857 --rc geninfo_all_blocks=1 00:15:53.857 --rc geninfo_unexecuted_blocks=1 00:15:53.857 00:15:53.857 ' 00:15:53.857 10:16:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.857 --rc genhtml_branch_coverage=1 00:15:53.857 --rc genhtml_function_coverage=1 00:15:53.857 --rc genhtml_legend=1 00:15:53.857 --rc geninfo_all_blocks=1 00:15:53.857 --rc geninfo_unexecuted_blocks=1 00:15:53.857 00:15:53.857 ' 00:15:53.857 10:16:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.857 --rc genhtml_branch_coverage=1 00:15:53.857 --rc genhtml_function_coverage=1 00:15:53.857 --rc genhtml_legend=1 00:15:53.857 --rc geninfo_all_blocks=1 00:15:53.857 --rc geninfo_unexecuted_blocks=1 00:15:53.857 00:15:53.857 ' 00:15:53.857 10:16:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:53.857 --rc genhtml_branch_coverage=1 00:15:53.857 --rc genhtml_function_coverage=1 00:15:53.857 --rc genhtml_legend=1 00:15:53.857 --rc geninfo_all_blocks=1 00:15:53.857 --rc geninfo_unexecuted_blocks=1 00:15:53.857 00:15:53.857 ' 00:15:53.857 10:16:13 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.857 10:16:13 -- nvmf/common.sh@7 -- # uname -s 00:15:53.857 10:16:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.857 10:16:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.857 10:16:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.857 10:16:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.857 10:16:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.857 10:16:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.857 10:16:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.857 10:16:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.857 10:16:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.857 10:16:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:15:53.857 
10:16:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:15:53.857 10:16:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.857 10:16:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.857 10:16:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.857 10:16:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.857 10:16:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.857 10:16:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.857 10:16:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.857 10:16:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.857 10:16:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.857 10:16:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.857 10:16:13 -- paths/export.sh@5 -- # export PATH 00:15:53.857 10:16:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.857 10:16:13 -- nvmf/common.sh@46 -- # : 0 00:15:53.857 10:16:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:53.857 10:16:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:53.857 10:16:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:53.857 10:16:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.857 10:16:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.857 10:16:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:15:53.857 10:16:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:53.857 10:16:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:53.857 10:16:13 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:53.857 10:16:13 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:53.857 10:16:13 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:53.857 10:16:13 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.857 10:16:13 -- target/multipath.sh@43 -- # nvmftestinit 00:15:53.857 10:16:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:53.857 10:16:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.857 10:16:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:53.857 10:16:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:53.857 10:16:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:53.857 10:16:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.857 10:16:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.857 10:16:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.857 10:16:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:53.857 10:16:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:53.857 10:16:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.857 10:16:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.857 10:16:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.857 10:16:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:53.857 10:16:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.857 10:16:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.857 10:16:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.857 10:16:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.857 10:16:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.857 10:16:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.857 10:16:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.857 10:16:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.857 10:16:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:53.857 10:16:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:53.857 Cannot find device "nvmf_tgt_br" 00:15:53.857 10:16:13 -- nvmf/common.sh@154 -- # true 00:15:53.857 10:16:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.857 Cannot find device "nvmf_tgt_br2" 00:15:53.857 10:16:13 -- nvmf/common.sh@155 -- # true 00:15:53.857 10:16:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:53.857 10:16:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:53.857 Cannot find device "nvmf_tgt_br" 00:15:53.858 10:16:13 -- nvmf/common.sh@157 -- # true 00:15:53.858 10:16:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:53.858 Cannot find device "nvmf_tgt_br2" 00:15:53.858 10:16:13 -- nvmf/common.sh@158 -- # true 00:15:53.858 10:16:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:54.117 10:16:13 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:54.117 10:16:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.117 10:16:13 -- nvmf/common.sh@161 -- # true 00:15:54.117 10:16:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.117 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.117 10:16:13 -- nvmf/common.sh@162 -- # true 00:15:54.117 10:16:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.117 10:16:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.117 10:16:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.117 10:16:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.117 10:16:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.117 10:16:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.117 10:16:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.117 10:16:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:54.117 10:16:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:54.117 10:16:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:54.117 10:16:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:54.117 10:16:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:54.117 10:16:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:54.117 10:16:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.117 10:16:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.117 10:16:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.117 10:16:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:54.117 10:16:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:54.117 10:16:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.117 10:16:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.117 10:16:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.117 10:16:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.117 10:16:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.117 10:16:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:54.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:15:54.117 00:15:54.117 --- 10.0.0.2 ping statistics --- 00:15:54.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.117 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:54.117 10:16:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:54.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:54.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:54.117 00:15:54.117 --- 10.0.0.3 ping statistics --- 00:15:54.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.117 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:54.117 10:16:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:15:54.117 00:15:54.117 --- 10.0.0.1 ping statistics --- 00:15:54.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.117 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:54.117 10:16:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.117 10:16:13 -- nvmf/common.sh@421 -- # return 0 00:15:54.117 10:16:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:54.117 10:16:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.117 10:16:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:54.117 10:16:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:54.117 10:16:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.117 10:16:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:54.118 10:16:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:54.377 10:16:13 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:54.377 10:16:13 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:54.377 10:16:13 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:54.377 10:16:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:54.377 10:16:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.377 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 10:16:13 -- nvmf/common.sh@469 -- # nvmfpid=85206 00:15:54.377 10:16:13 -- nvmf/common.sh@470 -- # waitforlisten 85206 00:15:54.377 10:16:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.377 10:16:13 -- common/autotest_common.sh@829 -- # '[' -z 85206 ']' 00:15:54.377 10:16:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.377 10:16:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.377 10:16:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.377 10:16:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.377 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:54.377 [2024-11-19 10:16:13.734506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:54.377 [2024-11-19 10:16:13.734599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.377 [2024-11-19 10:16:13.869406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.377 [2024-11-19 10:16:13.911483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:54.377 [2024-11-19 10:16:13.911644] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:54.377 [2024-11-19 10:16:13.911658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.377 [2024-11-19 10:16:13.911667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.377 [2024-11-19 10:16:13.911917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.377 [2024-11-19 10:16:13.912226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.377 [2024-11-19 10:16:13.912294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.377 [2024-11-19 10:16:13.912299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.635 10:16:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.635 10:16:14 -- common/autotest_common.sh@862 -- # return 0 00:15:54.635 10:16:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:54.635 10:16:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.635 10:16:14 -- common/autotest_common.sh@10 -- # set +x 00:15:54.635 10:16:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.635 10:16:14 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.894 [2024-11-19 10:16:14.356496] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.894 10:16:14 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:55.461 Malloc0 00:15:55.461 10:16:14 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:55.461 10:16:15 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.027 10:16:15 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.027 [2024-11-19 10:16:15.533950] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.027 10:16:15 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.285 [2024-11-19 10:16:15.802327] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:56.285 10:16:15 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:56.544 10:16:16 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:56.802 10:16:16 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:56.802 10:16:16 -- common/autotest_common.sh@1187 -- # local i=0 00:15:56.802 10:16:16 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.802 10:16:16 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:56.802 10:16:16 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:58.704 10:16:18 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
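A condensed view of what the xtrace above just did: the multipath target is assembled with the stock rpc.py helpers and the host then connects once per listener address. The commands below are lifted directly from this run (same NQN, serial, addresses and hostnqn); the harness's retry and wait logic is omitted, so treat this as a reading aid rather than the script itself.

    # TCP transport plus a RAM-backed namespace on subsystem cnode1
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # the same subsystem is exported on both target addresses, giving the host two paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # one kernel-initiator connection per path (-g/-G request header and data digests)
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a \
        --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a \
        --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
    # the check_ana_state helper that follows simply polls these sysfs files until they report the expected state
    cat /sys/block/nvme0c0n1/ana_state
    cat /sys/block/nvme0c1n1/ana_state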
00:15:58.704 10:16:18 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:58.704 10:16:18 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.962 10:16:18 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:58.962 10:16:18 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.962 10:16:18 -- common/autotest_common.sh@1197 -- # return 0 00:15:58.962 10:16:18 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:58.962 10:16:18 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:58.962 10:16:18 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:58.962 10:16:18 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:58.962 10:16:18 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:58.962 10:16:18 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:58.962 10:16:18 -- target/multipath.sh@38 -- # return 0 00:15:58.962 10:16:18 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:58.962 10:16:18 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:58.962 10:16:18 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:58.962 10:16:18 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:58.962 10:16:18 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:58.962 10:16:18 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:58.962 10:16:18 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:58.962 10:16:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:58.962 10:16:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.962 10:16:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:58.962 10:16:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:58.962 10:16:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:58.962 10:16:18 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:58.962 10:16:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:58.962 10:16:18 -- target/multipath.sh@22 -- # local timeout=20 00:15:58.962 10:16:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:58.962 10:16:18 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:58.962 10:16:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:58.962 10:16:18 -- target/multipath.sh@85 -- # echo numa 00:15:58.962 10:16:18 -- target/multipath.sh@88 -- # fio_pid=85337 00:15:58.962 10:16:18 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:58.962 10:16:18 -- target/multipath.sh@90 -- # sleep 1 00:15:58.962 [global] 00:15:58.962 thread=1 00:15:58.962 invalidate=1 00:15:58.962 rw=randrw 00:15:58.962 time_based=1 00:15:58.962 runtime=6 00:15:58.962 ioengine=libaio 00:15:58.962 direct=1 00:15:58.962 bs=4096 00:15:58.962 iodepth=128 00:15:58.962 norandommap=0 00:15:58.962 numjobs=1 00:15:58.962 00:15:58.962 verify_dump=1 00:15:58.962 verify_backlog=512 00:15:58.962 verify_state_save=0 00:15:58.962 do_verify=1 00:15:58.962 verify=crc32c-intel 00:15:58.962 [job0] 00:15:58.962 filename=/dev/nvme0n1 00:15:58.962 Could not set queue depth (nvme0n1) 00:15:58.962 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.962 fio-3.35 00:15:58.962 Starting 1 thread 00:15:59.896 10:16:19 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:00.155 10:16:19 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:00.413 10:16:19 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:00.413 10:16:19 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:00.413 10:16:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.413 10:16:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:00.413 10:16:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:00.413 10:16:19 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:00.413 10:16:19 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:00.413 10:16:19 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:00.413 10:16:19 -- target/multipath.sh@22 -- # local timeout=20 00:16:00.413 10:16:19 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:00.413 10:16:19 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:00.413 10:16:19 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:00.413 10:16:19 -- target/multipath.sh@25 -- # sleep 1s 00:16:01.349 10:16:20 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:01.349 10:16:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:01.349 10:16:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:01.349 10:16:20 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:01.611 10:16:21 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:02.224 10:16:21 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:02.224 10:16:21 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:02.224 10:16:21 -- target/multipath.sh@22 -- # local timeout=20 00:16:02.224 10:16:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:02.224 10:16:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:02.224 10:16:21 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:02.224 10:16:21 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:02.224 10:16:21 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:02.224 10:16:21 -- target/multipath.sh@22 -- # local timeout=20 00:16:02.224 10:16:21 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:02.224 10:16:21 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:02.224 10:16:21 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:02.224 10:16:21 -- target/multipath.sh@25 -- # sleep 1s 00:16:03.159 10:16:22 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:03.159 10:16:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:03.159 10:16:22 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:03.159 10:16:22 -- target/multipath.sh@104 -- # wait 85337 00:16:05.074 00:16:05.074 job0: (groupid=0, jobs=1): err= 0: pid=85363: Tue Nov 19 10:16:24 2024 00:16:05.074 read: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(254MiB/6006msec) 00:16:05.074 slat (usec): min=3, max=8602, avg=52.16, stdev=242.98 00:16:05.074 clat (usec): min=1077, max=19759, avg=8075.18, stdev=1442.24 00:16:05.074 lat (usec): min=1098, max=19781, avg=8127.34, stdev=1453.38 00:16:05.074 clat percentiles (usec): 00:16:05.074 | 1.00th=[ 4621], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 6980], 00:16:05.074 | 30.00th=[ 7373], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8356], 00:16:05.074 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[10683], 00:16:05.074 | 99.00th=[12387], 99.50th=[13173], 99.90th=[14091], 99.95th=[14353], 00:16:05.074 | 99.99th=[14746] 00:16:05.074 bw ( KiB/s): min= 8296, max=30048, per=52.34%, avg=22706.18, stdev=5832.91, samples=11 00:16:05.074 iops : min= 2074, max= 7512, avg=5676.55, stdev=1458.23, samples=11 00:16:05.074 write: IOPS=6305, BW=24.6MiB/s (25.8MB/s)(134MiB/5454msec); 0 zone resets 00:16:05.074 slat (usec): min=12, max=2951, avg=64.59, stdev=156.10 00:16:05.074 clat (usec): min=481, max=14268, avg=6847.73, stdev=1224.78 00:16:05.074 lat (usec): min=532, max=14296, avg=6912.32, stdev=1229.75 00:16:05.074 clat percentiles (usec): 00:16:05.074 | 1.00th=[ 3687], 5.00th=[ 4686], 10.00th=[ 5538], 20.00th=[ 6128], 00:16:05.074 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6849], 60.00th=[ 7046], 00:16:05.074 | 70.00th=[ 7242], 80.00th=[ 7570], 90.00th=[ 8225], 95.00th=[ 8848], 00:16:05.074 | 99.00th=[10421], 99.50th=[11076], 99.90th=[12387], 99.95th=[12649], 00:16:05.074 | 99.99th=[14222] 00:16:05.074 bw ( KiB/s): min= 8816, max=29168, per=90.14%, avg=22736.00, stdev=5582.01, samples=11 00:16:05.074 iops : min= 2204, max= 7292, avg=5684.00, stdev=1395.50, samples=11 00:16:05.074 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:16:05.074 lat (msec) : 2=0.06%, 4=0.81%, 10=92.96%, 20=6.15% 00:16:05.074 cpu : usr=5.63%, sys=24.36%, ctx=6300, majf=0, minf=102 00:16:05.074 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:05.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:05.074 issued rwts: total=65139,34391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.074 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:05.074 00:16:05.074 Run status group 0 (all jobs): 00:16:05.075 READ: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=254MiB (267MB), run=6006-6006msec 00:16:05.075 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=134MiB (141MB), run=5454-5454msec 00:16:05.075 00:16:05.075 Disk stats (read/write): 00:16:05.075 nvme0n1: ios=64269/33814, merge=0/0, ticks=484030/215291, in_queue=699321, util=98.67% 00:16:05.075 10:16:24 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:05.642 10:16:24 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:05.900 10:16:25 -- target/multipath.sh@109 
-- # check_ana_state nvme0c0n1 optimized 00:16:05.900 10:16:25 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:05.900 10:16:25 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.900 10:16:25 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:05.900 10:16:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:05.900 10:16:25 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:05.900 10:16:25 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:05.900 10:16:25 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:05.900 10:16:25 -- target/multipath.sh@22 -- # local timeout=20 00:16:05.900 10:16:25 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:05.900 10:16:25 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:05.900 10:16:25 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:16:05.900 10:16:25 -- target/multipath.sh@25 -- # sleep 1s 00:16:06.837 10:16:26 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:06.837 10:16:26 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:06.837 10:16:26 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:06.837 10:16:26 -- target/multipath.sh@113 -- # echo round-robin 00:16:06.837 10:16:26 -- target/multipath.sh@116 -- # fio_pid=85488 00:16:06.837 10:16:26 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:06.837 10:16:26 -- target/multipath.sh@118 -- # sleep 1 00:16:06.837 [global] 00:16:06.837 thread=1 00:16:06.837 invalidate=1 00:16:06.837 rw=randrw 00:16:06.837 time_based=1 00:16:06.837 runtime=6 00:16:06.837 ioengine=libaio 00:16:06.837 direct=1 00:16:06.837 bs=4096 00:16:06.837 iodepth=128 00:16:06.837 norandommap=0 00:16:06.837 numjobs=1 00:16:06.837 00:16:06.837 verify_dump=1 00:16:06.837 verify_backlog=512 00:16:06.837 verify_state_save=0 00:16:06.837 do_verify=1 00:16:06.837 verify=crc32c-intel 00:16:06.837 [job0] 00:16:06.837 filename=/dev/nvme0n1 00:16:06.837 Could not set queue depth (nvme0n1) 00:16:07.095 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:07.095 fio-3.35 00:16:07.095 Starting 1 thread 00:16:08.030 10:16:27 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:08.288 10:16:27 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:08.547 10:16:27 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:08.547 10:16:27 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:08.547 10:16:27 -- target/multipath.sh@22 -- # local timeout=20 00:16:08.547 10:16:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:08.547 10:16:27 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:16:08.547 10:16:27 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:08.547 10:16:27 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:08.547 10:16:27 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:08.547 10:16:27 -- target/multipath.sh@22 -- # local timeout=20 00:16:08.547 10:16:27 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:08.547 10:16:27 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:08.547 10:16:27 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:08.547 10:16:27 -- target/multipath.sh@25 -- # sleep 1s 00:16:09.482 10:16:28 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:09.482 10:16:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:09.482 10:16:28 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:09.482 10:16:28 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:09.740 10:16:29 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:09.998 10:16:29 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:09.998 10:16:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:09.998 10:16:29 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.998 10:16:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:09.998 10:16:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:09.998 10:16:29 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:09.998 10:16:29 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:09.998 10:16:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:09.998 10:16:29 -- target/multipath.sh@22 -- # local timeout=20 00:16:09.998 10:16:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:09.998 10:16:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:09.998 10:16:29 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:09.998 10:16:29 -- target/multipath.sh@25 -- # sleep 1s 00:16:11.374 10:16:30 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:16:11.374 10:16:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:11.374 10:16:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:11.374 10:16:30 -- target/multipath.sh@132 -- # wait 85488 00:16:13.275 00:16:13.275 job0: (groupid=0, jobs=1): err= 0: pid=85519: Tue Nov 19 10:16:32 2024 00:16:13.275 read: IOPS=12.5k, BW=48.8MiB/s (51.2MB/s)(293MiB/6002msec) 00:16:13.275 slat (usec): min=3, max=6144, avg=40.11, stdev=198.92 00:16:13.275 clat (usec): min=515, max=17047, avg=7099.22, stdev=1797.88 00:16:13.275 lat (usec): min=528, max=17055, avg=7139.33, stdev=1811.65 00:16:13.275 clat percentiles (usec): 00:16:13.275 | 1.00th=[ 2671], 5.00th=[ 4146], 10.00th=[ 4752], 20.00th=[ 5604], 00:16:13.275 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7504], 00:16:13.275 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[10028], 00:16:13.275 | 99.00th=[11731], 99.50th=[12518], 99.90th=[14746], 99.95th=[15008], 00:16:13.275 | 99.99th=[15795] 00:16:13.275 bw ( KiB/s): min=14608, max=48504, per=58.26%, avg=29122.40, stdev=9491.56, samples=10 00:16:13.275 iops : min= 3652, max=12126, avg=7280.60, stdev=2372.89, samples=10 00:16:13.275 write: IOPS=7770, BW=30.4MiB/s (31.8MB/s)(154MiB/5070msec); 0 zone resets 00:16:13.275 slat (usec): min=4, max=2615, avg=53.03, stdev=116.36 00:16:13.275 clat (usec): min=310, max=15310, avg=5803.11, stdev=1761.45 00:16:13.275 lat (usec): min=356, max=15338, avg=5856.14, stdev=1773.01 00:16:13.275 clat percentiles (usec): 00:16:13.275 | 1.00th=[ 2024], 5.00th=[ 2999], 10.00th=[ 3425], 20.00th=[ 4047], 00:16:13.275 | 30.00th=[ 4686], 40.00th=[ 5473], 50.00th=[ 6128], 60.00th=[ 6521], 00:16:13.275 | 70.00th=[ 6849], 80.00th=[ 7177], 90.00th=[ 7701], 95.00th=[ 8455], 00:16:13.275 | 99.00th=[10159], 99.50th=[10683], 99.90th=[12649], 99.95th=[13304], 00:16:13.275 | 99.99th=[14091] 00:16:13.275 bw ( KiB/s): min=14472, max=47568, per=93.49%, avg=29057.60, stdev=9255.57, samples=10 00:16:13.275 iops : min= 3618, max=11892, avg=7264.40, stdev=2313.89, samples=10 00:16:13.275 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.08% 00:16:13.275 lat (msec) : 2=0.55%, 4=8.62%, 10=86.97%, 20=3.75% 00:16:13.275 cpu : usr=6.35%, sys=28.94%, ctx=8703, majf=0, minf=199 00:16:13.275 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:13.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.275 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:13.275 issued rwts: total=75008,39394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.275 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:13.275 00:16:13.275 Run status group 0 (all jobs): 00:16:13.275 READ: bw=48.8MiB/s (51.2MB/s), 48.8MiB/s-48.8MiB/s (51.2MB/s-51.2MB/s), io=293MiB (307MB), run=6002-6002msec 00:16:13.275 WRITE: bw=30.4MiB/s (31.8MB/s), 30.4MiB/s-30.4MiB/s (31.8MB/s-31.8MB/s), io=154MiB (161MB), run=5070-5070msec 00:16:13.275 00:16:13.275 Disk stats (read/write): 00:16:13.275 nvme0n1: ios=73421/39356, merge=0/0, ticks=474018/200061, in_queue=674079, util=98.58% 00:16:13.275 10:16:32 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:13.275 10:16:32 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.275 10:16:32 -- common/autotest_common.sh@1208 -- # local i=0 00:16:13.275 10:16:32 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:13.275 10:16:32 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.275 10:16:32 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:13.275 10:16:32 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.275 10:16:32 -- common/autotest_common.sh@1220 -- # return 0 00:16:13.275 10:16:32 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.534 10:16:32 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:13.534 10:16:32 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:13.534 10:16:32 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:13.534 10:16:32 -- target/multipath.sh@144 -- # nvmftestfini 00:16:13.534 10:16:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.534 10:16:32 -- nvmf/common.sh@116 -- # sync 00:16:13.534 10:16:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.534 10:16:32 -- nvmf/common.sh@119 -- # set +e 00:16:13.534 10:16:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.534 10:16:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.534 rmmod nvme_tcp 00:16:13.534 rmmod nvme_fabrics 00:16:13.534 rmmod nvme_keyring 00:16:13.534 10:16:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.534 10:16:33 -- nvmf/common.sh@123 -- # set -e 00:16:13.534 10:16:33 -- nvmf/common.sh@124 -- # return 0 00:16:13.534 10:16:33 -- nvmf/common.sh@477 -- # '[' -n 85206 ']' 00:16:13.534 10:16:33 -- nvmf/common.sh@478 -- # killprocess 85206 00:16:13.534 10:16:33 -- common/autotest_common.sh@936 -- # '[' -z 85206 ']' 00:16:13.534 10:16:33 -- common/autotest_common.sh@940 -- # kill -0 85206 00:16:13.534 10:16:33 -- common/autotest_common.sh@941 -- # uname 00:16:13.534 10:16:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.534 10:16:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85206 00:16:13.534 10:16:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.534 10:16:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.534 killing process with pid 85206 00:16:13.534 10:16:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85206' 00:16:13.534 10:16:33 -- common/autotest_common.sh@955 -- # kill 85206 00:16:13.534 10:16:33 -- common/autotest_common.sh@960 -- # wait 85206 00:16:13.792 10:16:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:13.792 10:16:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:13.792 10:16:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:13.792 10:16:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.792 10:16:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:13.792 10:16:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.792 10:16:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.792 10:16:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.792 10:16:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:13.792 00:16:13.792 real 0m20.158s 00:16:13.792 user 1m19.211s 00:16:13.792 sys 0m6.842s 00:16:13.792 10:16:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:13.792 10:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.792 ************************************ 00:16:13.792 END TEST nvmf_multipath 00:16:13.792 ************************************ 00:16:13.792 10:16:33 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:13.792 10:16:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:13.792 10:16:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:13.792 10:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:13.792 ************************************ 00:16:13.792 START TEST nvmf_zcopy 00:16:13.792 ************************************ 00:16:13.792 10:16:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.051 * Looking for test storage... 00:16:14.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:14.051 10:16:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:14.051 10:16:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:14.051 10:16:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:14.051 10:16:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:14.051 10:16:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:14.051 10:16:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:14.051 10:16:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:14.051 10:16:33 -- scripts/common.sh@335 -- # IFS=.-: 00:16:14.051 10:16:33 -- scripts/common.sh@335 -- # read -ra ver1 00:16:14.051 10:16:33 -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.051 10:16:33 -- scripts/common.sh@336 -- # read -ra ver2 00:16:14.051 10:16:33 -- scripts/common.sh@337 -- # local 'op=<' 00:16:14.051 10:16:33 -- scripts/common.sh@339 -- # ver1_l=2 00:16:14.051 10:16:33 -- scripts/common.sh@340 -- # ver2_l=1 00:16:14.051 10:16:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:14.051 10:16:33 -- scripts/common.sh@343 -- # case "$op" in 00:16:14.051 10:16:33 -- scripts/common.sh@344 -- # : 1 00:16:14.051 10:16:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:14.051 10:16:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.051 10:16:33 -- scripts/common.sh@364 -- # decimal 1 00:16:14.051 10:16:33 -- scripts/common.sh@352 -- # local d=1 00:16:14.051 10:16:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.052 10:16:33 -- scripts/common.sh@354 -- # echo 1 00:16:14.052 10:16:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:14.052 10:16:33 -- scripts/common.sh@365 -- # decimal 2 00:16:14.052 10:16:33 -- scripts/common.sh@352 -- # local d=2 00:16:14.052 10:16:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.052 10:16:33 -- scripts/common.sh@354 -- # echo 2 00:16:14.052 10:16:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:14.052 10:16:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:14.052 10:16:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:14.052 10:16:33 -- scripts/common.sh@367 -- # return 0 00:16:14.052 10:16:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.052 10:16:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:14.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.052 --rc genhtml_branch_coverage=1 00:16:14.052 --rc genhtml_function_coverage=1 00:16:14.052 --rc genhtml_legend=1 00:16:14.052 --rc geninfo_all_blocks=1 00:16:14.052 --rc geninfo_unexecuted_blocks=1 00:16:14.052 00:16:14.052 ' 00:16:14.052 10:16:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:14.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.052 --rc genhtml_branch_coverage=1 00:16:14.052 --rc genhtml_function_coverage=1 00:16:14.052 --rc genhtml_legend=1 00:16:14.052 --rc geninfo_all_blocks=1 00:16:14.052 --rc geninfo_unexecuted_blocks=1 00:16:14.052 00:16:14.052 ' 00:16:14.052 10:16:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:14.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.052 --rc genhtml_branch_coverage=1 00:16:14.052 --rc genhtml_function_coverage=1 00:16:14.052 --rc genhtml_legend=1 00:16:14.052 --rc geninfo_all_blocks=1 00:16:14.052 --rc geninfo_unexecuted_blocks=1 00:16:14.052 00:16:14.052 ' 00:16:14.052 10:16:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:14.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.052 --rc genhtml_branch_coverage=1 00:16:14.052 --rc genhtml_function_coverage=1 00:16:14.052 --rc genhtml_legend=1 00:16:14.052 --rc geninfo_all_blocks=1 00:16:14.052 --rc geninfo_unexecuted_blocks=1 00:16:14.052 00:16:14.052 ' 00:16:14.052 10:16:33 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.052 10:16:33 -- nvmf/common.sh@7 -- # uname -s 00:16:14.052 10:16:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.052 10:16:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.052 10:16:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.052 10:16:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.052 10:16:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.052 10:16:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.052 10:16:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.052 10:16:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.052 10:16:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.052 10:16:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:16:14.052 
10:16:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:16:14.052 10:16:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.052 10:16:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.052 10:16:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.052 10:16:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.052 10:16:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.052 10:16:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.052 10:16:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.052 10:16:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.052 10:16:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.052 10:16:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.052 10:16:33 -- paths/export.sh@5 -- # export PATH 00:16:14.052 10:16:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.052 10:16:33 -- nvmf/common.sh@46 -- # : 0 00:16:14.052 10:16:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.052 10:16:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.052 10:16:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.052 10:16:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.052 10:16:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.052 10:16:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
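To make the wall of nvmf/common.sh trace easier to follow: the variables being set here are what later become the target command line. A rough sketch with the values from this run; the += and namespace-prefixing steps are the ones visible in the trace (nvmf/common.sh@28 and @208), while the first line seeding NVMF_APP with the binary path and the trailing backgrounding are illustrative assumptions about the sourced script.

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)      # assumed initial value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                     # shm id 0, all tracepoint groups
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")     # namespace is nvmf_tgt_ns_spdk
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")          # run the target inside that namespace
    "${NVMF_APP[@]}" -m 0x2 &                                       # this zcopy run pins the target to core 1
    nvmfpid=$!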
00:16:14.052 10:16:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.052 10:16:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.052 10:16:33 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:14.052 10:16:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.052 10:16:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.052 10:16:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.052 10:16:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.052 10:16:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.052 10:16:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.052 10:16:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.052 10:16:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.052 10:16:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.052 10:16:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.052 10:16:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.052 10:16:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.052 10:16:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.052 10:16:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.052 10:16:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.052 10:16:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.052 10:16:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.052 10:16:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.052 10:16:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.052 10:16:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.052 10:16:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.052 10:16:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.052 10:16:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.052 10:16:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.052 Cannot find device "nvmf_tgt_br" 00:16:14.052 10:16:33 -- nvmf/common.sh@154 -- # true 00:16:14.052 10:16:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.052 Cannot find device "nvmf_tgt_br2" 00:16:14.052 10:16:33 -- nvmf/common.sh@155 -- # true 00:16:14.052 10:16:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.052 10:16:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.052 Cannot find device "nvmf_tgt_br" 00:16:14.052 10:16:33 -- nvmf/common.sh@157 -- # true 00:16:14.052 10:16:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.052 Cannot find device "nvmf_tgt_br2" 00:16:14.052 10:16:33 -- nvmf/common.sh@158 -- # true 00:16:14.052 10:16:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.327 10:16:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.327 10:16:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.327 10:16:33 -- nvmf/common.sh@161 -- # true 00:16:14.327 10:16:33 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.327 10:16:33 -- nvmf/common.sh@162 -- # true 00:16:14.327 10:16:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.327 10:16:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.327 10:16:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.327 10:16:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.327 10:16:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.327 10:16:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.327 10:16:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.327 10:16:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.327 10:16:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.327 10:16:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.327 10:16:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.327 10:16:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.327 10:16:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.327 10:16:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.327 10:16:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.327 10:16:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.327 10:16:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.327 10:16:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.327 10:16:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.327 10:16:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.327 10:16:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:14.327 10:16:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:14.327 10:16:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:14.327 10:16:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:14.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:14.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:16:14.327 00:16:14.327 --- 10.0.0.2 ping statistics --- 00:16:14.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.327 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:14.327 10:16:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:14.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:14.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:16:14.327 00:16:14.327 --- 10.0.0.3 ping statistics --- 00:16:14.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.327 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:14.327 10:16:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:14.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:14.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:14.327 00:16:14.327 --- 10.0.0.1 ping statistics --- 00:16:14.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:14.327 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:14.327 10:16:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:14.327 10:16:33 -- nvmf/common.sh@421 -- # return 0 00:16:14.327 10:16:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:14.327 10:16:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:14.327 10:16:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:14.327 10:16:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:14.327 10:16:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:14.327 10:16:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:14.327 10:16:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:14.327 10:16:33 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:14.327 10:16:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:14.327 10:16:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:14.327 10:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:14.327 10:16:33 -- nvmf/common.sh@469 -- # nvmfpid=85802 00:16:14.327 10:16:33 -- nvmf/common.sh@470 -- # waitforlisten 85802 00:16:14.327 10:16:33 -- common/autotest_common.sh@829 -- # '[' -z 85802 ']' 00:16:14.327 10:16:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.327 10:16:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:14.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.327 10:16:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.327 10:16:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:14.327 10:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:14.327 10:16:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:14.622 [2024-11-19 10:16:33.901330] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:14.622 [2024-11-19 10:16:33.901427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.622 [2024-11-19 10:16:34.037630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.622 [2024-11-19 10:16:34.073039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:14.622 [2024-11-19 10:16:34.073172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.622 [2024-11-19 10:16:34.073184] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:14.622 [2024-11-19 10:16:34.073193] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
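The run of ip and iptables commands above (nvmf/common.sh@153 through @206) rebuilds the same virtual topology used for the multipath test earlier: one network namespace for the target, three veth pairs, and a bridge in the root namespace. A condensed sketch, with the link-up commands and the teardown of any previous run omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side, stays in the root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                                 # ties the three *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings just above (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) verify that wiring before nvmf_tgt is started inside the namespace.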
00:16:14.622 [2024-11-19 10:16:34.073218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.622 10:16:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.622 10:16:34 -- common/autotest_common.sh@862 -- # return 0 00:16:14.622 10:16:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:14.622 10:16:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.622 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 10:16:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.881 10:16:34 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:14.881 10:16:34 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:14.881 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.881 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 [2024-11-19 10:16:34.193794] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.881 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.881 10:16:34 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:14.881 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.881 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.881 10:16:34 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.881 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.881 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.881 [2024-11-19 10:16:34.209907] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.881 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.881 10:16:34 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:14.881 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.882 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.882 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.882 10:16:34 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:14.882 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.882 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.882 malloc0 00:16:14.882 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.882 10:16:34 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:14.882 10:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.882 10:16:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.882 10:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.882 10:16:34 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:14.882 10:16:34 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:14.882 10:16:34 -- nvmf/common.sh@520 -- # config=() 00:16:14.882 10:16:34 -- nvmf/common.sh@520 -- # local subsystem config 00:16:14.882 10:16:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:14.882 10:16:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:14.882 { 00:16:14.882 "params": { 00:16:14.882 "name": "Nvme$subsystem", 00:16:14.882 "trtype": "$TEST_TRANSPORT", 
00:16:14.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.882 "adrfam": "ipv4", 00:16:14.882 "trsvcid": "$NVMF_PORT", 00:16:14.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.882 "hdgst": ${hdgst:-false}, 00:16:14.882 "ddgst": ${ddgst:-false} 00:16:14.882 }, 00:16:14.882 "method": "bdev_nvme_attach_controller" 00:16:14.882 } 00:16:14.882 EOF 00:16:14.882 )") 00:16:14.882 10:16:34 -- nvmf/common.sh@542 -- # cat 00:16:14.882 10:16:34 -- nvmf/common.sh@544 -- # jq . 00:16:14.882 10:16:34 -- nvmf/common.sh@545 -- # IFS=, 00:16:14.882 10:16:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:14.882 "params": { 00:16:14.882 "name": "Nvme1", 00:16:14.882 "trtype": "tcp", 00:16:14.882 "traddr": "10.0.0.2", 00:16:14.882 "adrfam": "ipv4", 00:16:14.882 "trsvcid": "4420", 00:16:14.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.882 "hdgst": false, 00:16:14.882 "ddgst": false 00:16:14.882 }, 00:16:14.882 "method": "bdev_nvme_attach_controller" 00:16:14.882 }' 00:16:14.882 [2024-11-19 10:16:34.295448] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:14.882 [2024-11-19 10:16:34.295542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85834 ] 00:16:15.140 [2024-11-19 10:16:34.432916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.140 [2024-11-19 10:16:34.479245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.140 Running I/O for 10 seconds... 00:16:25.113 00:16:25.113 Latency(us) 00:16:25.113 [2024-11-19T10:16:44.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.113 [2024-11-19T10:16:44.659Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:25.113 Verification LBA range: start 0x0 length 0x1000 00:16:25.113 Nvme1n1 : 10.01 8922.89 69.71 0.00 0.00 14306.84 2010.76 20614.05 00:16:25.113 [2024-11-19T10:16:44.659Z] =================================================================================================================== 00:16:25.113 [2024-11-19T10:16:44.660Z] Total : 8922.89 69.71 0.00 0.00 14306.84 2010.76 20614.05 00:16:25.373 10:16:44 -- target/zcopy.sh@39 -- # perfpid=85957 00:16:25.373 10:16:44 -- target/zcopy.sh@41 -- # xtrace_disable 00:16:25.373 10:16:44 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:25.373 10:16:44 -- common/autotest_common.sh@10 -- # set +x 00:16:25.373 10:16:44 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:25.373 10:16:44 -- nvmf/common.sh@520 -- # config=() 00:16:25.373 10:16:44 -- nvmf/common.sh@520 -- # local subsystem config 00:16:25.373 10:16:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:25.373 10:16:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:25.373 { 00:16:25.373 "params": { 00:16:25.373 "name": "Nvme$subsystem", 00:16:25.373 "trtype": "$TEST_TRANSPORT", 00:16:25.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.373 "adrfam": "ipv4", 00:16:25.373 "trsvcid": "$NVMF_PORT", 00:16:25.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.373 "hdgst": ${hdgst:-false}, 00:16:25.373 "ddgst": ${ddgst:-false} 
00:16:25.373 }, 00:16:25.373 "method": "bdev_nvme_attach_controller" 00:16:25.373 } 00:16:25.373 EOF 00:16:25.373 )") 00:16:25.373 10:16:44 -- nvmf/common.sh@542 -- # cat 00:16:25.373 [2024-11-19 10:16:44.790130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.790170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 10:16:44 -- nvmf/common.sh@544 -- # jq . 00:16:25.373 10:16:44 -- nvmf/common.sh@545 -- # IFS=, 00:16:25.373 10:16:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:25.373 "params": { 00:16:25.373 "name": "Nvme1", 00:16:25.373 "trtype": "tcp", 00:16:25.373 "traddr": "10.0.0.2", 00:16:25.373 "adrfam": "ipv4", 00:16:25.373 "trsvcid": "4420", 00:16:25.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.373 "hdgst": false, 00:16:25.373 "ddgst": false 00:16:25.373 }, 00:16:25.373 "method": "bdev_nvme_attach_controller" 00:16:25.373 }' 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.802107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.802137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.810093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.810120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.821755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
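Everything bdevperf is talking to here was configured a few lines earlier with zero-copy enabled on the TCP transport. The rpc_cmd calls, condensed from the zcopy.sh trace above (retry wrappers and the rpc_cmd plumbing omitted):

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy            # --zcopy requests the transport's zero-copy path
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf itself attaches through the JSON shown above, which gen_nvmf_target_json expands and feeds in over a process-substitution file descriptor (/dev/fd/63 in this invocation), amounting to a single bdev_nvme_attach_controller call against 10.0.0.2:4420.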
00:16:25.373 [2024-11-19 10:16:44.821845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85957 ] 00:16:25.373 [2024-11-19 10:16:44.822103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.822126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.834104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.834130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.846110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.846138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.854105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.854134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.866118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.866145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.878127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.878154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.890125] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.890152] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.902135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.902164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.373 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.373 [2024-11-19 10:16:44.914147] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.373 [2024-11-19 10:16:44.914175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.633 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.633 [2024-11-19 10:16:44.926133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.633 [2024-11-19 10:16:44.926159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.633 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.633 [2024-11-19 10:16:44.938139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.633 [2024-11-19 10:16:44.938164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.633 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.633 [2024-11-19 10:16:44.950146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.633 [2024-11-19 10:16:44.950173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.633 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.633 [2024-11-19 10:16:44.959042] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.633 [2024-11-19 10:16:44.962173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.633 [2024-11-19 10:16:44.962209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.633 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.633 [2024-11-19 10:16:44.974186] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.633 [2024-11-19 10:16:44.974227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:44.986193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:44.986235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:44.998183] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:44.998216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 [2024-11-19 10:16:44.998514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.010178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.010210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.022199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.022238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.034199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.034237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.046202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.046238] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.058191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.058225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.070218] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.070250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.082230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.082258] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.094238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.094268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.106239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.106268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.114237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.114265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.126255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.126285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.134241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.134267] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 Running I/O for 5 seconds... 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.150588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.150625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.634 [2024-11-19 10:16:45.167856] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.634 [2024-11-19 10:16:45.167892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.634 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.178139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.178176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.193517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.193551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.210751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.210786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.227197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.227233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.244787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.244837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.261674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.261710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.277734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.277771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.296304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.296342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.311129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.311164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.327508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.327544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.343698] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.343735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.353882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.353916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.367904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.367937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.377572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.377605] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.388141] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.388174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.400791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.400840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.417607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.417645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:25.894 [2024-11-19 10:16:45.427793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:25.894 [2024-11-19 10:16:45.427839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:25.894 2024/11/19 
10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.441723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.441759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.457652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.457690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.475743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.475785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.489908] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.489944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.499679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.499713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.510423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.510457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.522910] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.522953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.541660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.541695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.556217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.556253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.566279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.566312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.576769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.576803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.587080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.587115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.597483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.597516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.612272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.612304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.629175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.629209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.644732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.644771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.654258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.654291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.665202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.665235] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.682127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.682162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.154 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.154 [2024-11-19 10:16:45.697610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.154 [2024-11-19 10:16:45.697647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.713689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.713738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.723957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.723989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.738598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.738630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.748154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.748184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.758693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.758726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.771714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.771746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.788954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.788985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.806785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.806836] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.821129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.821164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.835750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.835785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.851774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.851809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.869019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.869054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.883763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.883796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.898591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.898625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.914916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 
10:16:45.914958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.932124] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.932158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.413 [2024-11-19 10:16:45.946835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.413 [2024-11-19 10:16:45.946867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.413 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:45.963561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:45.963595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:45.980506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:45.980539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:45.997271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:45.997307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.009077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.009110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.018393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:26.673 [2024-11-19 10:16:46.018426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.031367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.031400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.046867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.046902] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.056003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.056036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.071260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.071297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.088350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.088392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.103044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.103078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.117955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:16:26.673 [2024-11-19 10:16:46.117991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.128019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.673 [2024-11-19 10:16:46.128053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.673 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.673 [2024-11-19 10:16:46.138873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.138905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.674 [2024-11-19 10:16:46.155589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.155625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.674 [2024-11-19 10:16:46.165852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.165885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.674 [2024-11-19 10:16:46.180651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.180686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.674 [2024-11-19 10:16:46.191045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.191076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.674 [2024-11-19 10:16:46.205631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:26.674 [2024-11-19 10:16:46.205667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.674 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.223962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.224001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.238189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.238224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.253164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.253197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.268633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.268669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.277360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.277395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.293131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:26.933 [2024-11-19 10:16:46.293168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:26.933 [2024-11-19 10:16:46.303093] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:26.933 [2024-11-19 10:16:46.303126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:26.933 2024/11/19 10:16:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three messages (subsystem.c:1793 "Requested NSID 1 already in use", nvmf_rpc.c:1513 "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeat for each subsequent nvmf_subsystem_add_ns attempt logged between 10:16:46.303 and 10:16:48.466 ...]
00:16:29.012 [2024-11-19 10:16:48.466397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:29.012 [2024-11-19 10:16:48.466435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
10:16:48.482851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.012 [2024-11-19 10:16:48.482898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.012 [2024-11-19 10:16:48.500151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.012 [2024-11-19 10:16:48.500190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.012 [2024-11-19 10:16:48.515028] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.012 [2024-11-19 10:16:48.515066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.012 [2024-11-19 10:16:48.531498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.012 [2024-11-19 10:16:48.531540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.012 [2024-11-19 10:16:48.543101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.012 [2024-11-19 10:16:48.543141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.012 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.558010] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.558050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.574368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.574419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:16:29.271 [2024-11-19 10:16:48.592338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.592384] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.607916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.607972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.619529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.619570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.636587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.636630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.652442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.652482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.668496] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.668548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.678433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.678476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.692681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.692721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.705064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.705103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.714498] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.714538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.729787] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.729844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.747144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.747184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.271 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.271 [2024-11-19 10:16:48.763184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.271 [2024-11-19 10:16:48.763229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.272 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.272 [2024-11-19 10:16:48.779480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.272 [2024-11-19 10:16:48.779527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.272 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:16:29.272 [2024-11-19 10:16:48.797066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.272 [2024-11-19 10:16:48.797110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.272 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.272 [2024-11-19 10:16:48.811680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.272 [2024-11-19 10:16:48.811724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.272 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.827805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.827861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.844105] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.844149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.861018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.861062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.877200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.877240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.894224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.894264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.910292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.910336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.927526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.927566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.944358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.944396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.961303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.961342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.977679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.977723] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:48.994441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:48.994504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:49.010647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:49.010700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:49.026868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:49.026931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:49.036897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:49.036950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:49.051792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:49.051856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.530 [2024-11-19 10:16:49.068692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.530 [2024-11-19 10:16:49.068745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.530 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.078992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.079044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.093582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.093628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.106136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.106198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.122267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.122330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.138616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.138682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.154776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.154842] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.173185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.173227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.788 [2024-11-19 10:16:49.187887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.788 [2024-11-19 10:16:49.187942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.788 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.198567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.198609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.213157] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.213198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.230155] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.230195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.246112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.246150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.256011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.256047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.270610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.270648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.280418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.280456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.295054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.295100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.312159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.312198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:29.789 [2024-11-19 10:16:49.328314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:29.789 [2024-11-19 10:16:49.328364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:29.789 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.047 [2024-11-19 10:16:49.337986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.047 [2024-11-19 10:16:49.338020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.047 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.047 [2024-11-19 10:16:49.352883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.047 [2024-11-19 10:16:49.352919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.047 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.047 [2024-11-19 10:16:49.362087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.047 [2024-11-19 10:16:49.362124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.047 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.047 [2024-11-19 10:16:49.376572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.047 [2024-11-19 10:16:49.376612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.047 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.047 [2024-11-19 10:16:49.393925] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.047 [2024-11-19 10:16:49.393964] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.410161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.410200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.426966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.427003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.443682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.443719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.453909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.453944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.468063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.468101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.484456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.484500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.501603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.501644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.516598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.516637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 
10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.525520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.525559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.540992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.541031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.550758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.550796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.561084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.561122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.571395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.571434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.048 [2024-11-19 10:16:49.586083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.048 [2024-11-19 10:16:49.586125] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.048 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.595839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.595874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.606361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.606410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.617216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.617254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.633984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.634022] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.652094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.652132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.667065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.667108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.683493] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.683538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.698386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.698426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.708495] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.708533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.722420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.722458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.738780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.738831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.755221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.755259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.772524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.772562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.782733] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.782771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.797015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.797053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.807261] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.807302] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.307 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.307 [2024-11-19 10:16:49.821736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.307 [2024-11-19 10:16:49.821783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.308 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.308 [2024-11-19 10:16:49.831447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.308 [2024-11-19 10:16:49.831489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.308 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.308 [2024-11-19 10:16:49.846675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.308 [2024-11-19 10:16:49.846719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.308 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.858257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.858295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.875419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.875457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.889922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.889964] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.906606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.906654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.916801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.916851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.932023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.932066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.941977] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.942015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.957220] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.957266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.973674] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:49.973715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:49.991204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 
10:16:49.991248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:49 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.001891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.001929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.015952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.016003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.025897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.025937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.041567] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.041608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.056711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.056898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.066419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.066573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.077079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:16:30.586 [2024-11-19 10:16:50.077228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.094770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.094951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.586 [2024-11-19 10:16:50.105090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.586 [2024-11-19 10:16:50.105128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.586 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.119782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.119836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.129094] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.129251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.140994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.141156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 00:16:30.868 Latency(us) 00:16:30.868 [2024-11-19T10:16:50.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:30.868 [2024-11-19T10:16:50.414Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:30.868 Nvme1n1 : 5.01 11587.88 90.53 0.00 0.00 11031.86 4349.21 26571.87 00:16:30.868 [2024-11-19T10:16:50.414Z] =================================================================================================================== 00:16:30.868 [2024-11-19T10:16:50.414Z] Total : 11587.88 90.53 0.00 0.00 11031.86 4349.21 26571.87 00:16:30.868 
[2024-11-19 10:16:50.152371] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.152531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.164370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.164411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.176405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.176462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.188400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.188457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.200410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.200461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.212402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.212450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.224398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.224441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:16:30.868 [2024-11-19 10:16:50.236457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.236519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.248427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.248479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.260437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.260490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.272415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.272458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.280385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.280418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 [2024-11-19 10:16:50.288386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:30.868 [2024-11-19 10:16:50.288421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.868 2024/11/19 10:16:50 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:30.868 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85957) - No such process 00:16:30.868 10:16:50 -- target/zcopy.sh@49 -- # wait 85957 00:16:30.868 10:16:50 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.868 10:16:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.868 10:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.868 10:16:50 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:30.868 10:16:50 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:30.868 10:16:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.868 10:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.868 delay0 00:16:30.868 10:16:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.868 10:16:50 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:30.868 10:16:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.868 10:16:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.869 10:16:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.869 10:16:50 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:31.127 [2024-11-19 10:16:50.485422] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:37.689 Initializing NVMe Controllers 00:16:37.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:37.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:37.689 Initialization complete. Launching workers. 00:16:37.689 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1178 00:16:37.689 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1465, failed to submit 33 00:16:37.689 success 1278, unsuccess 187, failed 0 00:16:37.689 10:16:56 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:37.689 10:16:56 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:37.689 10:16:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.689 10:16:56 -- nvmf/common.sh@116 -- # sync 00:16:37.689 10:16:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:37.689 10:16:56 -- nvmf/common.sh@119 -- # set +e 00:16:37.689 10:16:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:37.689 10:16:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:37.689 rmmod nvme_tcp 00:16:37.689 rmmod nvme_fabrics 00:16:37.689 rmmod nvme_keyring 00:16:37.689 10:16:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.689 10:16:56 -- nvmf/common.sh@123 -- # set -e 00:16:37.689 10:16:56 -- nvmf/common.sh@124 -- # return 0 00:16:37.689 10:16:56 -- nvmf/common.sh@477 -- # '[' -n 85802 ']' 00:16:37.689 10:16:56 -- nvmf/common.sh@478 -- # killprocess 85802 00:16:37.689 10:16:56 -- common/autotest_common.sh@936 -- # '[' -z 85802 ']' 00:16:37.689 10:16:56 -- common/autotest_common.sh@940 -- # kill -0 85802 00:16:37.689 10:16:56 -- common/autotest_common.sh@941 -- # uname 00:16:37.689 10:16:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:37.689 10:16:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85802 00:16:37.690 killing process with pid 85802 00:16:37.690 10:16:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:37.690 10:16:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:37.690 10:16:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85802' 00:16:37.690 10:16:56 -- common/autotest_common.sh@955 -- # kill 85802 00:16:37.690 10:16:56 -- common/autotest_common.sh@960 -- # wait 85802 00:16:37.690 10:16:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.690 10:16:57 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.690 10:16:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:37.690 10:16:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.690 10:16:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.690 10:16:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.690 10:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.690 10:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.690 10:16:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:37.690 ************************************ 00:16:37.690 END TEST nvmf_zcopy 00:16:37.690 ************************************ 00:16:37.690 00:16:37.690 real 0m23.750s 00:16:37.690 user 0m39.024s 00:16:37.690 sys 0m6.155s 00:16:37.690 10:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:37.690 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.690 10:16:57 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.690 10:16:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:37.690 10:16:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.690 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.690 ************************************ 00:16:37.690 START TEST nvmf_nmic 00:16:37.690 ************************************ 00:16:37.690 10:16:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:37.690 * Looking for test storage... 00:16:37.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:37.690 10:16:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:37.690 10:16:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:37.690 10:16:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:37.949 10:16:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:37.949 10:16:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:37.949 10:16:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:37.949 10:16:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:37.949 10:16:57 -- scripts/common.sh@335 -- # IFS=.-: 00:16:37.949 10:16:57 -- scripts/common.sh@335 -- # read -ra ver1 00:16:37.949 10:16:57 -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.949 10:16:57 -- scripts/common.sh@336 -- # read -ra ver2 00:16:37.949 10:16:57 -- scripts/common.sh@337 -- # local 'op=<' 00:16:37.949 10:16:57 -- scripts/common.sh@339 -- # ver1_l=2 00:16:37.949 10:16:57 -- scripts/common.sh@340 -- # ver2_l=1 00:16:37.949 10:16:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:37.949 10:16:57 -- scripts/common.sh@343 -- # case "$op" in 00:16:37.949 10:16:57 -- scripts/common.sh@344 -- # : 1 00:16:37.949 10:16:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:37.949 10:16:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:37.949 10:16:57 -- scripts/common.sh@364 -- # decimal 1 00:16:37.949 10:16:57 -- scripts/common.sh@352 -- # local d=1 00:16:37.949 10:16:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.949 10:16:57 -- scripts/common.sh@354 -- # echo 1 00:16:37.949 10:16:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:37.949 10:16:57 -- scripts/common.sh@365 -- # decimal 2 00:16:37.949 10:16:57 -- scripts/common.sh@352 -- # local d=2 00:16:37.949 10:16:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.949 10:16:57 -- scripts/common.sh@354 -- # echo 2 00:16:37.949 10:16:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:37.949 10:16:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:37.949 10:16:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:37.949 10:16:57 -- scripts/common.sh@367 -- # return 0 00:16:37.949 10:16:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.949 10:16:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:37.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.949 --rc genhtml_branch_coverage=1 00:16:37.949 --rc genhtml_function_coverage=1 00:16:37.949 --rc genhtml_legend=1 00:16:37.949 --rc geninfo_all_blocks=1 00:16:37.949 --rc geninfo_unexecuted_blocks=1 00:16:37.949 00:16:37.949 ' 00:16:37.949 10:16:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:37.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.949 --rc genhtml_branch_coverage=1 00:16:37.949 --rc genhtml_function_coverage=1 00:16:37.949 --rc genhtml_legend=1 00:16:37.949 --rc geninfo_all_blocks=1 00:16:37.949 --rc geninfo_unexecuted_blocks=1 00:16:37.949 00:16:37.949 ' 00:16:37.949 10:16:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:37.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.949 --rc genhtml_branch_coverage=1 00:16:37.949 --rc genhtml_function_coverage=1 00:16:37.949 --rc genhtml_legend=1 00:16:37.949 --rc geninfo_all_blocks=1 00:16:37.949 --rc geninfo_unexecuted_blocks=1 00:16:37.949 00:16:37.949 ' 00:16:37.949 10:16:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:37.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.949 --rc genhtml_branch_coverage=1 00:16:37.949 --rc genhtml_function_coverage=1 00:16:37.949 --rc genhtml_legend=1 00:16:37.949 --rc geninfo_all_blocks=1 00:16:37.949 --rc geninfo_unexecuted_blocks=1 00:16:37.949 00:16:37.949 ' 00:16:37.949 10:16:57 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.949 10:16:57 -- nvmf/common.sh@7 -- # uname -s 00:16:37.949 10:16:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.949 10:16:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.949 10:16:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.949 10:16:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.949 10:16:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.949 10:16:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.949 10:16:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.949 10:16:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.949 10:16:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.949 10:16:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:16:37.949 
10:16:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:16:37.949 10:16:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.949 10:16:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.949 10:16:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.949 10:16:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.949 10:16:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.949 10:16:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.949 10:16:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.949 10:16:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.949 10:16:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.949 10:16:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.949 10:16:57 -- paths/export.sh@5 -- # export PATH 00:16:37.949 10:16:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.949 10:16:57 -- nvmf/common.sh@46 -- # : 0 00:16:37.949 10:16:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:37.949 10:16:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:37.949 10:16:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:37.949 10:16:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.949 10:16:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.949 10:16:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:37.949 10:16:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:37.949 10:16:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:37.949 10:16:57 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.949 10:16:57 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.949 10:16:57 -- target/nmic.sh@14 -- # nvmftestinit 00:16:37.949 10:16:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:37.949 10:16:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.949 10:16:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:37.949 10:16:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:37.949 10:16:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:37.949 10:16:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.949 10:16:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.949 10:16:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.949 10:16:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:37.949 10:16:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:37.949 10:16:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.949 10:16:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.949 10:16:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.949 10:16:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:37.949 10:16:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.950 10:16:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.950 10:16:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.950 10:16:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.950 10:16:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.950 10:16:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.950 10:16:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.950 10:16:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.950 10:16:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:37.950 10:16:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:37.950 Cannot find device "nvmf_tgt_br" 00:16:37.950 10:16:57 -- nvmf/common.sh@154 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.950 Cannot find device "nvmf_tgt_br2" 00:16:37.950 10:16:57 -- nvmf/common.sh@155 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:37.950 10:16:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:37.950 Cannot find device "nvmf_tgt_br" 00:16:37.950 10:16:57 -- nvmf/common.sh@157 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:37.950 Cannot find device "nvmf_tgt_br2" 00:16:37.950 10:16:57 -- nvmf/common.sh@158 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:37.950 10:16:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:37.950 10:16:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.950 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:37.950 10:16:57 -- nvmf/common.sh@161 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.950 10:16:57 -- nvmf/common.sh@162 -- # true 00:16:37.950 10:16:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.950 10:16:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.950 10:16:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.950 10:16:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.950 10:16:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:38.208 10:16:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:38.208 10:16:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:38.208 10:16:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:38.208 10:16:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:38.208 10:16:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:38.208 10:16:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:38.208 10:16:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:38.208 10:16:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:38.208 10:16:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:38.208 10:16:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:38.208 10:16:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:38.209 10:16:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:38.209 10:16:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:38.209 10:16:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:38.209 10:16:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:38.209 10:16:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:38.209 10:16:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:38.209 10:16:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:38.209 10:16:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:38.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:38.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:16:38.209 00:16:38.209 --- 10.0.0.2 ping statistics --- 00:16:38.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.209 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:38.209 10:16:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:38.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:38.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:16:38.209 00:16:38.209 --- 10.0.0.3 ping statistics --- 00:16:38.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.209 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:38.209 10:16:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:38.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:38.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:38.209 00:16:38.209 --- 10.0.0.1 ping statistics --- 00:16:38.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:38.209 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:38.209 10:16:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:38.209 10:16:57 -- nvmf/common.sh@421 -- # return 0 00:16:38.209 10:16:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:38.209 10:16:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:38.209 10:16:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:38.209 10:16:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:38.209 10:16:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:38.209 10:16:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:38.209 10:16:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:38.209 10:16:57 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:38.209 10:16:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:38.209 10:16:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:38.209 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.209 10:16:57 -- nvmf/common.sh@469 -- # nvmfpid=86281 00:16:38.209 10:16:57 -- nvmf/common.sh@470 -- # waitforlisten 86281 00:16:38.209 10:16:57 -- common/autotest_common.sh@829 -- # '[' -z 86281 ']' 00:16:38.209 10:16:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:38.209 10:16:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:38.209 10:16:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.209 10:16:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:38.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:38.209 10:16:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.209 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.209 [2024-11-19 10:16:57.716560] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:38.209 [2024-11-19 10:16:57.716669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.467 [2024-11-19 10:16:57.855729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.467 [2024-11-19 10:16:57.904502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:38.467 [2024-11-19 10:16:57.904680] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.467 [2024-11-19 10:16:57.904695] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.467 [2024-11-19 10:16:57.904706] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:38.467 [2024-11-19 10:16:57.904869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.467 [2024-11-19 10:16:57.905321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.467 [2024-11-19 10:16:57.905584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.467 [2024-11-19 10:16:57.905592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.467 10:16:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.467 10:16:57 -- common/autotest_common.sh@862 -- # return 0 00:16:38.467 10:16:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:38.467 10:16:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.467 10:16:57 -- common/autotest_common.sh@10 -- # set +x 00:16:38.725 10:16:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.726 10:16:58 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 [2024-11-19 10:16:58.030377] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 Malloc0 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 [2024-11-19 10:16:58.088500] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:38.726 test case1: single bdev can't be used in multiple subsystems 00:16:38.726 10:16:58 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@28 -- # nmic_status=0 00:16:38.726 10:16:58 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 [2024-11-19 10:16:58.116302] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:38.726 [2024-11-19 10:16:58.116435] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:38.726 [2024-11-19 10:16:58.116540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.726 2024/11/19 10:16:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:16:38.726 request: 00:16:38.726 { 00:16:38.726 "method": "nvmf_subsystem_add_ns", 00:16:38.726 "params": { 00:16:38.726 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:38.726 "namespace": { 00:16:38.726 "bdev_name": "Malloc0" 00:16:38.726 } 00:16:38.726 } 00:16:38.726 } 00:16:38.726 Got JSON-RPC error response 00:16:38.726 GoRPCClient: error on JSON-RPC call 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@29 -- # nmic_status=1 00:16:38.726 10:16:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:38.726 Adding namespace failed - expected result. 00:16:38.726 10:16:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:16:38.726 test case2: host connect to nvmf target in multiple paths 00:16:38.726 10:16:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:38.726 10:16:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:38.726 10:16:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.726 10:16:58 -- common/autotest_common.sh@10 -- # set +x 00:16:38.726 [2024-11-19 10:16:58.128426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:38.726 10:16:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.726 10:16:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:38.984 10:16:58 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:38.984 10:16:58 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:38.984 10:16:58 -- common/autotest_common.sh@1187 -- # local i=0 00:16:38.984 10:16:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.984 10:16:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:16:38.984 10:16:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:40.949 10:17:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:40.949 10:17:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:40.949 10:17:00 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:40.949 10:17:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:16:40.949 10:17:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:40.949 10:17:00 -- common/autotest_common.sh@1197 -- # return 0 00:16:40.949 10:17:00 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:41.208 [global] 00:16:41.208 thread=1 00:16:41.208 invalidate=1 00:16:41.208 rw=write 00:16:41.208 time_based=1 00:16:41.208 runtime=1 00:16:41.208 ioengine=libaio 00:16:41.208 direct=1 00:16:41.208 bs=4096 00:16:41.208 iodepth=1 00:16:41.208 norandommap=0 00:16:41.208 numjobs=1 00:16:41.208 00:16:41.208 verify_dump=1 00:16:41.208 verify_backlog=512 00:16:41.208 verify_state_save=0 00:16:41.208 do_verify=1 00:16:41.208 verify=crc32c-intel 00:16:41.208 [job0] 00:16:41.208 filename=/dev/nvme0n1 00:16:41.208 Could not set queue depth (nvme0n1) 00:16:41.208 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:41.208 fio-3.35 00:16:41.208 Starting 1 thread 00:16:42.585 00:16:42.585 job0: (groupid=0, jobs=1): err= 0: pid=86373: Tue Nov 19 10:17:01 2024 00:16:42.585 read: IOPS=3376, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:16:42.585 slat (nsec): min=12787, max=52638, avg=15655.26, stdev=3703.72 00:16:42.585 clat (usec): min=122, max=395, avg=139.40, stdev=11.66 00:16:42.585 lat (usec): min=136, max=409, avg=155.06, stdev=12.94 00:16:42.585 clat percentiles (usec): 00:16:42.585 | 1.00th=[ 126], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 133], 00:16:42.586 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:16:42.586 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 151], 
95.00th=[ 157], 00:16:42.586 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 241], 99.95th=[ 359], 00:16:42.586 | 99.99th=[ 396] 00:16:42.586 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:42.586 slat (usec): min=19, max=161, avg=26.07, stdev= 7.71 00:16:42.586 clat (usec): min=25, max=446, avg=103.26, stdev=12.28 00:16:42.586 lat (usec): min=108, max=467, avg=129.33, stdev=15.94 00:16:42.586 clat percentiles (usec): 00:16:42.586 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 95], 20.00th=[ 97], 00:16:42.586 | 30.00th=[ 98], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 103], 00:16:42.586 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 120], 00:16:42.586 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 206], 99.95th=[ 441], 00:16:42.586 | 99.99th=[ 445] 00:16:42.586 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:42.586 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:42.586 lat (usec) : 50=0.01%, 100=22.62%, 250=77.28%, 500=0.09% 00:16:42.586 cpu : usr=2.60%, sys=10.80%, ctx=6965, majf=0, minf=5 00:16:42.586 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:42.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.586 issued rwts: total=3380,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:42.586 00:16:42.586 Run status group 0 (all jobs): 00:16:42.586 READ: bw=13.2MiB/s (13.8MB/s), 13.2MiB/s-13.2MiB/s (13.8MB/s-13.8MB/s), io=13.2MiB (13.8MB), run=1001-1001msec 00:16:42.586 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:42.586 00:16:42.586 Disk stats (read/write): 00:16:42.586 nvme0n1: ios=3122/3193, merge=0/0, ticks=469/367, in_queue=836, util=91.48% 00:16:42.586 10:17:01 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:42.586 10:17:01 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.586 10:17:01 -- common/autotest_common.sh@1208 -- # local i=0 00:16:42.586 10:17:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:16:42.586 10:17:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.586 10:17:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:16:42.586 10:17:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.586 10:17:01 -- common/autotest_common.sh@1220 -- # return 0 00:16:42.586 10:17:01 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:42.586 10:17:01 -- target/nmic.sh@53 -- # nvmftestfini 00:16:42.586 10:17:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:42.586 10:17:01 -- nvmf/common.sh@116 -- # sync 00:16:42.586 10:17:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:42.586 10:17:01 -- nvmf/common.sh@119 -- # set +e 00:16:42.586 10:17:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:42.586 10:17:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:42.586 rmmod nvme_tcp 00:16:42.586 rmmod nvme_fabrics 00:16:42.586 rmmod nvme_keyring 00:16:42.586 10:17:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:42.586 10:17:01 -- nvmf/common.sh@123 -- # set -e 00:16:42.586 10:17:01 -- nvmf/common.sh@124 -- # return 0 00:16:42.586 10:17:01 -- nvmf/common.sh@477 -- 
# '[' -n 86281 ']' 00:16:42.586 10:17:01 -- nvmf/common.sh@478 -- # killprocess 86281 00:16:42.586 10:17:01 -- common/autotest_common.sh@936 -- # '[' -z 86281 ']' 00:16:42.586 10:17:01 -- common/autotest_common.sh@940 -- # kill -0 86281 00:16:42.586 10:17:01 -- common/autotest_common.sh@941 -- # uname 00:16:42.586 10:17:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.586 10:17:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86281 00:16:42.586 10:17:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.586 10:17:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.586 killing process with pid 86281 00:16:42.586 10:17:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86281' 00:16:42.586 10:17:01 -- common/autotest_common.sh@955 -- # kill 86281 00:16:42.586 10:17:01 -- common/autotest_common.sh@960 -- # wait 86281 00:16:42.845 10:17:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:42.845 10:17:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:42.845 10:17:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:42.845 10:17:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:42.845 10:17:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:42.845 10:17:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.845 10:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.845 10:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.845 10:17:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:42.845 ************************************ 00:16:42.845 END TEST nvmf_nmic 00:16:42.845 ************************************ 00:16:42.845 00:16:42.845 real 0m5.057s 00:16:42.845 user 0m16.497s 00:16:42.845 sys 0m1.242s 00:16:42.845 10:17:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:42.845 10:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:42.845 10:17:02 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:42.845 10:17:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:42.845 10:17:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.845 10:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:42.845 ************************************ 00:16:42.845 START TEST nvmf_fio_target 00:16:42.845 ************************************ 00:16:42.845 10:17:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:42.845 * Looking for test storage... 
00:16:42.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:42.845 10:17:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:42.845 10:17:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:42.845 10:17:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:42.845 10:17:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:42.845 10:17:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:42.845 10:17:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:42.845 10:17:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:42.845 10:17:02 -- scripts/common.sh@335 -- # IFS=.-: 00:16:42.845 10:17:02 -- scripts/common.sh@335 -- # read -ra ver1 00:16:42.845 10:17:02 -- scripts/common.sh@336 -- # IFS=.-: 00:16:42.845 10:17:02 -- scripts/common.sh@336 -- # read -ra ver2 00:16:42.845 10:17:02 -- scripts/common.sh@337 -- # local 'op=<' 00:16:42.845 10:17:02 -- scripts/common.sh@339 -- # ver1_l=2 00:16:42.845 10:17:02 -- scripts/common.sh@340 -- # ver2_l=1 00:16:42.845 10:17:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:42.845 10:17:02 -- scripts/common.sh@343 -- # case "$op" in 00:16:42.845 10:17:02 -- scripts/common.sh@344 -- # : 1 00:16:42.845 10:17:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:42.845 10:17:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:42.845 10:17:02 -- scripts/common.sh@364 -- # decimal 1 00:16:42.845 10:17:02 -- scripts/common.sh@352 -- # local d=1 00:16:42.845 10:17:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:42.845 10:17:02 -- scripts/common.sh@354 -- # echo 1 00:16:42.845 10:17:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:42.845 10:17:02 -- scripts/common.sh@365 -- # decimal 2 00:16:42.845 10:17:02 -- scripts/common.sh@352 -- # local d=2 00:16:42.845 10:17:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:42.845 10:17:02 -- scripts/common.sh@354 -- # echo 2 00:16:42.845 10:17:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:42.845 10:17:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:42.845 10:17:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:42.845 10:17:02 -- scripts/common.sh@367 -- # return 0 00:16:42.845 10:17:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:42.845 10:17:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.845 --rc genhtml_branch_coverage=1 00:16:42.845 --rc genhtml_function_coverage=1 00:16:42.845 --rc genhtml_legend=1 00:16:42.845 --rc geninfo_all_blocks=1 00:16:42.845 --rc geninfo_unexecuted_blocks=1 00:16:42.845 00:16:42.845 ' 00:16:42.845 10:17:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.845 --rc genhtml_branch_coverage=1 00:16:42.845 --rc genhtml_function_coverage=1 00:16:42.845 --rc genhtml_legend=1 00:16:42.845 --rc geninfo_all_blocks=1 00:16:42.845 --rc geninfo_unexecuted_blocks=1 00:16:42.845 00:16:42.845 ' 00:16:42.845 10:17:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.845 --rc genhtml_branch_coverage=1 00:16:42.845 --rc genhtml_function_coverage=1 00:16:42.845 --rc genhtml_legend=1 00:16:42.845 --rc geninfo_all_blocks=1 00:16:42.845 --rc geninfo_unexecuted_blocks=1 00:16:42.845 00:16:42.845 ' 00:16:42.845 
10:17:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:42.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:42.845 --rc genhtml_branch_coverage=1 00:16:42.845 --rc genhtml_function_coverage=1 00:16:42.845 --rc genhtml_legend=1 00:16:42.845 --rc geninfo_all_blocks=1 00:16:42.845 --rc geninfo_unexecuted_blocks=1 00:16:42.845 00:16:42.845 ' 00:16:42.845 10:17:02 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.845 10:17:02 -- nvmf/common.sh@7 -- # uname -s 00:16:42.845 10:17:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.846 10:17:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.846 10:17:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.846 10:17:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.846 10:17:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.846 10:17:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.846 10:17:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.846 10:17:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.846 10:17:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.846 10:17:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.846 10:17:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:16:42.846 10:17:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:16:42.846 10:17:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.846 10:17:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.846 10:17:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.846 10:17:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.846 10:17:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.846 10:17:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.846 10:17:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.846 10:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.846 10:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.846 10:17:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.846 10:17:02 -- paths/export.sh@5 -- # export PATH 00:16:42.846 10:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.846 10:17:02 -- nvmf/common.sh@46 -- # : 0 00:16:42.846 10:17:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:42.846 10:17:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:42.846 10:17:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:42.846 10:17:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.846 10:17:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.846 10:17:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:42.846 10:17:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:42.846 10:17:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:42.846 10:17:02 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:42.846 10:17:02 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:42.846 10:17:02 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:42.846 10:17:02 -- target/fio.sh@16 -- # nvmftestinit 00:16:42.846 10:17:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:42.846 10:17:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.104 10:17:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:43.104 10:17:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:43.104 10:17:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:43.104 10:17:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.104 10:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.104 10:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.104 10:17:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:43.104 10:17:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:43.104 10:17:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:43.104 10:17:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:43.104 10:17:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:43.104 10:17:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:43.104 10:17:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.104 10:17:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.104 10:17:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.104 10:17:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:43.104 10:17:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.104 10:17:02 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.104 10:17:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.104 10:17:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.104 10:17:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.104 10:17:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.104 10:17:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.104 10:17:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.104 10:17:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:43.104 10:17:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:43.104 Cannot find device "nvmf_tgt_br" 00:16:43.104 10:17:02 -- nvmf/common.sh@154 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.104 Cannot find device "nvmf_tgt_br2" 00:16:43.104 10:17:02 -- nvmf/common.sh@155 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:43.104 10:17:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:43.104 Cannot find device "nvmf_tgt_br" 00:16:43.104 10:17:02 -- nvmf/common.sh@157 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:43.104 Cannot find device "nvmf_tgt_br2" 00:16:43.104 10:17:02 -- nvmf/common.sh@158 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:43.104 10:17:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:43.104 10:17:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.104 10:17:02 -- nvmf/common.sh@161 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.104 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.104 10:17:02 -- nvmf/common.sh@162 -- # true 00:16:43.104 10:17:02 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.104 10:17:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:43.104 10:17:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:43.104 10:17:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:43.104 10:17:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:43.105 10:17:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:43.105 10:17:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:43.105 10:17:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.105 10:17:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:43.105 10:17:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:43.105 10:17:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:43.105 10:17:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:43.105 10:17:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:43.105 10:17:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.105 10:17:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:16:43.105 10:17:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:43.105 10:17:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:43.363 10:17:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:43.363 10:17:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:43.363 10:17:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:43.363 10:17:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:43.363 10:17:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:43.363 10:17:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:43.363 10:17:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:43.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:16:43.363 00:16:43.363 --- 10.0.0.2 ping statistics --- 00:16:43.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.363 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:43.363 10:17:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:43.363 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:43.363 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:43.363 00:16:43.363 --- 10.0.0.3 ping statistics --- 00:16:43.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.363 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:43.363 10:17:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:43.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:16:43.363 00:16:43.363 --- 10.0.0.1 ping statistics --- 00:16:43.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.363 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:43.363 10:17:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.363 10:17:02 -- nvmf/common.sh@421 -- # return 0 00:16:43.363 10:17:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.363 10:17:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.363 10:17:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.363 10:17:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.363 10:17:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.363 10:17:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.363 10:17:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.363 10:17:02 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:43.363 10:17:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.363 10:17:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.363 10:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 10:17:02 -- nvmf/common.sh@469 -- # nvmfpid=86557 00:16:43.363 10:17:02 -- nvmf/common.sh@470 -- # waitforlisten 86557 00:16:43.363 10:17:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:43.363 10:17:02 -- common/autotest_common.sh@829 -- # '[' -z 86557 ']' 00:16:43.363 10:17:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:43.363 10:17:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.363 10:17:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.363 10:17:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.363 10:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.363 [2024-11-19 10:17:02.804570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:43.363 [2024-11-19 10:17:02.804690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.621 [2024-11-19 10:17:02.940372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.621 [2024-11-19 10:17:02.978790] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.621 [2024-11-19 10:17:02.979206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.621 [2024-11-19 10:17:02.979356] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.621 [2024-11-19 10:17:02.979552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.621 [2024-11-19 10:17:02.979833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.621 [2024-11-19 10:17:02.979906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.621 [2024-11-19 10:17:02.979960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.621 [2024-11-19 10:17:02.979958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.557 10:17:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.557 10:17:03 -- common/autotest_common.sh@862 -- # return 0 00:16:44.557 10:17:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:44.557 10:17:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.557 10:17:03 -- common/autotest_common.sh@10 -- # set +x 00:16:44.557 10:17:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.557 10:17:03 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:44.815 [2024-11-19 10:17:04.221764] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.815 10:17:04 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.380 10:17:04 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:45.380 10:17:04 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.637 10:17:05 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:45.637 10:17:05 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.894 10:17:05 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:45.894 10:17:05 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.152 10:17:05 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:46.152 10:17:05 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:46.410 10:17:05 -- target/fio.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.669 10:17:06 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:46.669 10:17:06 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.235 10:17:06 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:47.235 10:17:06 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:47.493 10:17:06 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:47.493 10:17:06 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:47.752 10:17:07 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:48.009 10:17:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:48.009 10:17:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:48.268 10:17:07 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:48.268 10:17:07 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:48.526 10:17:07 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.785 [2024-11-19 10:17:08.184042] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.785 10:17:08 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:49.043 10:17:08 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:49.302 10:17:08 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:49.561 10:17:08 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:49.561 10:17:08 -- common/autotest_common.sh@1187 -- # local i=0 00:16:49.561 10:17:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.561 10:17:08 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:16:49.561 10:17:08 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:16:49.561 10:17:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:16:51.466 10:17:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:16:51.466 10:17:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:16:51.466 10:17:10 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:16:51.466 10:17:10 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:16:51.466 10:17:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.466 10:17:10 -- common/autotest_common.sh@1197 -- # return 0 00:16:51.466 10:17:10 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:51.466 [global] 00:16:51.466 thread=1 00:16:51.466 invalidate=1 00:16:51.466 rw=write 00:16:51.466 time_based=1 00:16:51.466 runtime=1 00:16:51.466 ioengine=libaio 00:16:51.466 direct=1 00:16:51.466 bs=4096 00:16:51.466 iodepth=1 00:16:51.466 norandommap=0 
00:16:51.466 numjobs=1 00:16:51.466 00:16:51.466 verify_dump=1 00:16:51.466 verify_backlog=512 00:16:51.466 verify_state_save=0 00:16:51.466 do_verify=1 00:16:51.466 verify=crc32c-intel 00:16:51.466 [job0] 00:16:51.466 filename=/dev/nvme0n1 00:16:51.466 [job1] 00:16:51.466 filename=/dev/nvme0n2 00:16:51.466 [job2] 00:16:51.466 filename=/dev/nvme0n3 00:16:51.466 [job3] 00:16:51.466 filename=/dev/nvme0n4 00:16:51.724 Could not set queue depth (nvme0n1) 00:16:51.724 Could not set queue depth (nvme0n2) 00:16:51.724 Could not set queue depth (nvme0n3) 00:16:51.724 Could not set queue depth (nvme0n4) 00:16:51.724 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.724 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.724 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.724 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:51.724 fio-3.35 00:16:51.724 Starting 4 threads 00:16:53.101 00:16:53.101 job0: (groupid=0, jobs=1): err= 0: pid=86864: Tue Nov 19 10:17:12 2024 00:16:53.101 read: IOPS=2695, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:16:53.101 slat (usec): min=14, max=186, avg=21.08, stdev= 7.34 00:16:53.101 clat (usec): min=13, max=540, avg=160.67, stdev=17.06 00:16:53.101 lat (usec): min=148, max=576, avg=181.75, stdev=19.02 00:16:53.101 clat percentiles (usec): 00:16:53.101 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 149], 00:16:53.101 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:16:53.101 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 186], 00:16:53.101 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 269], 99.95th=[ 498], 00:16:53.101 | 99.99th=[ 537] 00:16:53.101 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:53.101 slat (usec): min=19, max=116, avg=32.59, stdev=10.71 00:16:53.101 clat (usec): min=96, max=295, avg=128.90, stdev=12.62 00:16:53.101 lat (usec): min=120, max=412, avg=161.49, stdev=18.84 00:16:53.101 clat percentiles (usec): 00:16:53.101 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:16:53.101 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 129], 60.00th=[ 131], 00:16:53.101 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:16:53.101 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 198], 99.95th=[ 204], 00:16:53.101 | 99.99th=[ 297] 00:16:53.101 bw ( KiB/s): min=12288, max=12288, per=32.31%, avg=12288.00, stdev= 0.00, samples=1 00:16:53.101 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:53.101 lat (usec) : 20=0.02%, 100=0.05%, 250=99.84%, 500=0.07%, 750=0.02% 00:16:53.101 cpu : usr=3.70%, sys=11.00%, ctx=5771, majf=0, minf=7 00:16:53.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:53.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.101 issued rwts: total=2698,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:53.101 job1: (groupid=0, jobs=1): err= 0: pid=86865: Tue Nov 19 10:17:12 2024 00:16:53.101 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:53.101 slat (nsec): min=16100, max=72942, avg=29486.84, stdev=5262.61 00:16:53.101 clat (usec): min=212, max=1046, 
avg=283.79, stdev=51.99 00:16:53.101 lat (usec): min=244, max=1077, avg=313.27, stdev=52.59 00:16:53.101 clat percentiles (usec): 00:16:53.101 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:16:53.101 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:16:53.101 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 343], 00:16:53.101 | 99.00th=[ 537], 99.50th=[ 594], 99.90th=[ 1004], 99.95th=[ 1045], 00:16:53.101 | 99.99th=[ 1045] 00:16:53.101 write: IOPS=1943, BW=7772KiB/s (7959kB/s)(7780KiB/1001msec); 0 zone resets 00:16:53.101 slat (usec): min=21, max=120, avg=39.95, stdev= 8.33 00:16:53.101 clat (usec): min=101, max=1236, avg=221.12, stdev=47.43 00:16:53.101 lat (usec): min=138, max=1267, avg=261.07, stdev=47.65 00:16:53.101 clat percentiles (usec): 00:16:53.101 | 1.00th=[ 151], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 190], 00:16:53.101 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 215], 60.00th=[ 225], 00:16:53.101 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 297], 00:16:53.101 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 914], 99.95th=[ 1237], 00:16:53.101 | 99.99th=[ 1237] 00:16:53.101 bw ( KiB/s): min= 8192, max= 8192, per=21.54%, avg=8192.00, stdev= 0.00, samples=1 00:16:53.101 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:53.101 lat (usec) : 250=51.91%, 500=47.46%, 750=0.49%, 1000=0.06% 00:16:53.101 lat (msec) : 2=0.09% 00:16:53.101 cpu : usr=2.40%, sys=9.40%, ctx=3481, majf=0, minf=13 00:16:53.101 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:53.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.101 issued rwts: total=1536,1945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.101 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:53.102 job2: (groupid=0, jobs=1): err= 0: pid=86866: Tue Nov 19 10:17:12 2024 00:16:53.102 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:53.102 slat (nsec): min=15481, max=80890, avg=26988.09, stdev=9679.06 00:16:53.102 clat (usec): min=206, max=1032, avg=286.38, stdev=48.02 00:16:53.102 lat (usec): min=230, max=1061, avg=313.37, stdev=49.75 00:16:53.102 clat percentiles (usec): 00:16:53.102 | 1.00th=[ 237], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:16:53.102 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:16:53.102 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 318], 95.00th=[ 334], 00:16:53.102 | 99.00th=[ 553], 99.50th=[ 603], 99.90th=[ 840], 99.95th=[ 1037], 00:16:53.102 | 99.99th=[ 1037] 00:16:53.102 write: IOPS=1938, BW=7752KiB/s (7938kB/s)(7760KiB/1001msec); 0 zone resets 00:16:53.102 slat (usec): min=19, max=117, avg=41.91, stdev=10.72 00:16:53.102 clat (usec): min=123, max=1387, avg=220.02, stdev=51.56 00:16:53.102 lat (usec): min=153, max=1436, avg=261.93, stdev=52.83 00:16:53.102 clat percentiles (usec): 00:16:53.102 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:16:53.102 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 221], 00:16:53.102 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 297], 00:16:53.102 | 99.00th=[ 355], 99.50th=[ 388], 99.90th=[ 947], 99.95th=[ 1385], 00:16:53.102 | 99.99th=[ 1385] 00:16:53.102 bw ( KiB/s): min= 8192, max= 8192, per=21.54%, avg=8192.00, stdev= 0.00, samples=1 00:16:53.102 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:53.102 lat (usec) : 250=49.91%, 
500=49.34%, 750=0.60%, 1000=0.09% 00:16:53.102 lat (msec) : 2=0.06% 00:16:53.102 cpu : usr=2.30%, sys=9.20%, ctx=3489, majf=0, minf=13 00:16:53.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:53.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.102 issued rwts: total=1536,1940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:53.102 job3: (groupid=0, jobs=1): err= 0: pid=86867: Tue Nov 19 10:17:12 2024 00:16:53.102 read: IOPS=2229, BW=8919KiB/s (9133kB/s)(8928KiB/1001msec) 00:16:53.102 slat (nsec): min=13167, max=51757, avg=19489.58, stdev=6567.23 00:16:53.102 clat (usec): min=140, max=2615, avg=208.24, stdev=58.02 00:16:53.102 lat (usec): min=154, max=2642, avg=227.73, stdev=57.14 00:16:53.102 clat percentiles (usec): 00:16:53.102 | 1.00th=[ 153], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:16:53.102 | 30.00th=[ 194], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:16:53.102 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 247], 00:16:53.102 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 314], 99.95th=[ 318], 00:16:53.102 | 99.99th=[ 2606] 00:16:53.102 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:53.102 slat (usec): min=19, max=115, avg=28.05, stdev= 9.99 00:16:53.102 clat (usec): min=104, max=4019, avg=160.06, stdev=78.76 00:16:53.102 lat (usec): min=127, max=4058, avg=188.11, stdev=79.58 00:16:53.102 clat percentiles (usec): 00:16:53.102 | 1.00th=[ 119], 5.00th=[ 125], 10.00th=[ 133], 20.00th=[ 141], 00:16:53.102 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:16:53.102 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 190], 00:16:53.102 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 249], 99.95th=[ 281], 00:16:53.102 | 99.99th=[ 4015] 00:16:53.102 bw ( KiB/s): min=10560, max=10560, per=27.77%, avg=10560.00, stdev= 0.00, samples=1 00:16:53.102 iops : min= 2640, max= 2640, avg=2640.00, stdev= 0.00, samples=1 00:16:53.102 lat (usec) : 250=98.02%, 500=1.94% 00:16:53.102 lat (msec) : 4=0.02%, 10=0.02% 00:16:53.102 cpu : usr=1.90%, sys=8.90%, ctx=4793, majf=0, minf=6 00:16:53.102 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:53.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.102 issued rwts: total=2232,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.102 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:53.102 00:16:53.102 Run status group 0 (all jobs): 00:16:53.102 READ: bw=31.2MiB/s (32.7MB/s), 6138KiB/s-10.5MiB/s (6285kB/s-11.0MB/s), io=31.3MiB (32.8MB), run=1001-1001msec 00:16:53.102 WRITE: bw=37.1MiB/s (38.9MB/s), 7752KiB/s-12.0MiB/s (7938kB/s-12.6MB/s), io=37.2MiB (39.0MB), run=1001-1001msec 00:16:53.102 00:16:53.102 Disk stats (read/write): 00:16:53.102 nvme0n1: ios=2403/2560, merge=0/0, ticks=428/366, in_queue=794, util=88.68% 00:16:53.102 nvme0n2: ios=1450/1536, merge=0/0, ticks=438/363, in_queue=801, util=87.64% 00:16:53.102 nvme0n3: ios=1430/1536, merge=0/0, ticks=427/362, in_queue=789, util=89.13% 00:16:53.102 nvme0n4: ios=2040/2048, merge=0/0, ticks=438/356, in_queue=794, util=89.78% 00:16:53.102 10:17:12 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 
-v 00:16:53.102 [global] 00:16:53.102 thread=1 00:16:53.102 invalidate=1 00:16:53.102 rw=randwrite 00:16:53.102 time_based=1 00:16:53.102 runtime=1 00:16:53.102 ioengine=libaio 00:16:53.102 direct=1 00:16:53.102 bs=4096 00:16:53.102 iodepth=1 00:16:53.102 norandommap=0 00:16:53.102 numjobs=1 00:16:53.102 00:16:53.102 verify_dump=1 00:16:53.102 verify_backlog=512 00:16:53.102 verify_state_save=0 00:16:53.102 do_verify=1 00:16:53.102 verify=crc32c-intel 00:16:53.102 [job0] 00:16:53.102 filename=/dev/nvme0n1 00:16:53.102 [job1] 00:16:53.102 filename=/dev/nvme0n2 00:16:53.102 [job2] 00:16:53.102 filename=/dev/nvme0n3 00:16:53.102 [job3] 00:16:53.102 filename=/dev/nvme0n4 00:16:53.102 Could not set queue depth (nvme0n1) 00:16:53.102 Could not set queue depth (nvme0n2) 00:16:53.102 Could not set queue depth (nvme0n3) 00:16:53.102 Could not set queue depth (nvme0n4) 00:16:53.102 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.102 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.102 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.102 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:53.102 fio-3.35 00:16:53.102 Starting 4 threads 00:16:54.493 00:16:54.493 job0: (groupid=0, jobs=1): err= 0: pid=86923: Tue Nov 19 10:17:13 2024 00:16:54.493 read: IOPS=2290, BW=9163KiB/s (9383kB/s)(9172KiB/1001msec) 00:16:54.493 slat (nsec): min=12508, max=67926, avg=16444.75, stdev=3437.15 00:16:54.493 clat (usec): min=135, max=688, avg=196.22, stdev=42.86 00:16:54.493 lat (usec): min=151, max=705, avg=212.67, stdev=43.21 00:16:54.493 clat percentiles (usec): 00:16:54.493 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 169], 00:16:54.493 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:16:54.493 | 70.00th=[ 200], 80.00th=[ 210], 90.00th=[ 253], 95.00th=[ 273], 00:16:54.493 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 537], 99.95th=[ 619], 00:16:54.493 | 99.99th=[ 685] 00:16:54.493 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:54.493 slat (usec): min=11, max=106, avg=24.90, stdev= 6.73 00:16:54.493 clat (usec): min=106, max=1292, avg=171.56, stdev=57.62 00:16:54.493 lat (usec): min=125, max=1327, avg=196.47, stdev=58.72 00:16:54.493 clat percentiles (usec): 00:16:54.493 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 133], 00:16:54.493 | 30.00th=[ 139], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 165], 00:16:54.493 | 70.00th=[ 186], 80.00th=[ 206], 90.00th=[ 251], 95.00th=[ 273], 00:16:54.493 | 99.00th=[ 318], 99.50th=[ 383], 99.90th=[ 693], 99.95th=[ 865], 00:16:54.493 | 99.99th=[ 1287] 00:16:54.493 bw ( KiB/s): min=11784, max=11784, per=30.57%, avg=11784.00, stdev= 0.00, samples=1 00:16:54.493 iops : min= 2946, max= 2946, avg=2946.00, stdev= 0.00, samples=1 00:16:54.493 lat (usec) : 250=89.53%, 500=10.32%, 750=0.10%, 1000=0.02% 00:16:54.493 lat (msec) : 2=0.02% 00:16:54.493 cpu : usr=2.30%, sys=7.30%, ctx=4854, majf=0, minf=7 00:16:54.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.493 issued rwts: total=2293,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.493 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:16:54.493 job1: (groupid=0, jobs=1): err= 0: pid=86924: Tue Nov 19 10:17:13 2024 00:16:54.493 read: IOPS=2753, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:16:54.493 slat (nsec): min=12687, max=47655, avg=14653.95, stdev=2293.58 00:16:54.493 clat (usec): min=131, max=1648, avg=172.79, stdev=46.11 00:16:54.493 lat (usec): min=145, max=1661, avg=187.45, stdev=46.47 00:16:54.493 clat percentiles (usec): 00:16:54.493 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:16:54.493 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:16:54.493 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 215], 00:16:54.493 | 99.00th=[ 310], 99.50th=[ 375], 99.90th=[ 586], 99.95th=[ 1090], 00:16:54.493 | 99.99th=[ 1647] 00:16:54.493 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:54.493 slat (usec): min=18, max=101, avg=21.95, stdev= 5.01 00:16:54.493 clat (usec): min=96, max=374, avg=132.23, stdev=18.81 00:16:54.493 lat (usec): min=116, max=413, avg=154.17, stdev=19.79 00:16:54.493 clat percentiles (usec): 00:16:54.493 | 1.00th=[ 104], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:16:54.493 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 130], 60.00th=[ 135], 00:16:54.493 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:16:54.493 | 99.00th=[ 188], 99.50th=[ 206], 99.90th=[ 247], 99.95th=[ 343], 00:16:54.493 | 99.99th=[ 375] 00:16:54.493 bw ( KiB/s): min=12288, max=12288, per=31.88%, avg=12288.00, stdev= 0.00, samples=1 00:16:54.493 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:54.493 lat (usec) : 100=0.05%, 250=98.80%, 500=1.10%, 750=0.02% 00:16:54.493 lat (msec) : 2=0.03% 00:16:54.493 cpu : usr=2.40%, sys=7.60%, ctx=5829, majf=0, minf=11 00:16:54.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.493 issued rwts: total=2756,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.494 job2: (groupid=0, jobs=1): err= 0: pid=86925: Tue Nov 19 10:17:13 2024 00:16:54.494 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:16:54.494 slat (nsec): min=11618, max=85847, avg=26782.82, stdev=9206.40 00:16:54.494 clat (usec): min=172, max=3191, avg=286.96, stdev=110.35 00:16:54.494 lat (usec): min=190, max=3215, avg=313.74, stdev=109.94 00:16:54.494 clat percentiles (usec): 00:16:54.494 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:16:54.494 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:16:54.494 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 338], 00:16:54.494 | 99.00th=[ 420], 99.50th=[ 465], 99.90th=[ 2835], 99.95th=[ 3195], 00:16:54.494 | 99.99th=[ 3195] 00:16:54.494 write: IOPS=1963, BW=7852KiB/s (8041kB/s)(7860KiB/1001msec); 0 zone resets 00:16:54.494 slat (usec): min=11, max=107, avg=33.24, stdev=10.43 00:16:54.494 clat (usec): min=115, max=7861, avg=225.44, stdev=181.65 00:16:54.494 lat (usec): min=141, max=7893, avg=258.67, stdev=181.44 00:16:54.494 clat percentiles (usec): 00:16:54.494 | 1.00th=[ 135], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 200], 00:16:54.494 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:16:54.494 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 269], 
00:16:54.494 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 2114], 99.95th=[ 7832], 00:16:54.494 | 99.99th=[ 7832] 00:16:54.494 bw ( KiB/s): min= 8192, max= 8192, per=21.26%, avg=8192.00, stdev= 0.00, samples=1 00:16:54.494 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:54.494 lat (usec) : 250=51.24%, 500=48.53%, 750=0.06% 00:16:54.494 lat (msec) : 2=0.06%, 4=0.09%, 10=0.03% 00:16:54.494 cpu : usr=1.80%, sys=8.60%, ctx=3501, majf=0, minf=15 00:16:54.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.494 issued rwts: total=1536,1965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.494 job3: (groupid=0, jobs=1): err= 0: pid=86926: Tue Nov 19 10:17:13 2024 00:16:54.494 read: IOPS=2007, BW=8032KiB/s (8225kB/s)(8040KiB/1001msec) 00:16:54.494 slat (nsec): min=13288, max=69693, avg=18305.16, stdev=5676.90 00:16:54.494 clat (usec): min=142, max=584, avg=240.60, stdev=65.72 00:16:54.494 lat (usec): min=156, max=599, avg=258.91, stdev=67.65 00:16:54.494 clat percentiles (usec): 00:16:54.494 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 165], 00:16:54.494 | 30.00th=[ 174], 40.00th=[ 194], 50.00th=[ 273], 60.00th=[ 281], 00:16:54.494 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:16:54.494 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 465], 99.95th=[ 465], 00:16:54.494 | 99.99th=[ 586] 00:16:54.494 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:54.494 slat (usec): min=19, max=112, avg=29.20, stdev= 8.39 00:16:54.494 clat (usec): min=105, max=670, avg=200.51, stdev=43.47 00:16:54.494 lat (usec): min=126, max=712, avg=229.71, stdev=45.90 00:16:54.494 clat percentiles (usec): 00:16:54.494 | 1.00th=[ 115], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 149], 00:16:54.494 | 30.00th=[ 190], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:16:54.494 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 253], 00:16:54.494 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 302], 99.95th=[ 449], 00:16:54.494 | 99.99th=[ 668] 00:16:54.494 bw ( KiB/s): min= 8192, max= 8192, per=21.26%, avg=8192.00, stdev= 0.00, samples=1 00:16:54.494 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:54.494 lat (usec) : 250=68.33%, 500=31.62%, 750=0.05% 00:16:54.494 cpu : usr=1.90%, sys=7.20%, ctx=4062, majf=0, minf=13 00:16:54.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:54.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.494 issued rwts: total=2010,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:54.494 00:16:54.494 Run status group 0 (all jobs): 00:16:54.494 READ: bw=33.5MiB/s (35.2MB/s), 6138KiB/s-10.8MiB/s (6285kB/s-11.3MB/s), io=33.6MiB (35.2MB), run=1001-1001msec 00:16:54.494 WRITE: bw=37.6MiB/s (39.5MB/s), 7852KiB/s-12.0MiB/s (8041kB/s-12.6MB/s), io=37.7MiB (39.5MB), run=1001-1001msec 00:16:54.494 00:16:54.494 Disk stats (read/write): 00:16:54.494 nvme0n1: ios=2098/2302, merge=0/0, ticks=438/389, in_queue=827, util=88.08% 00:16:54.494 nvme0n2: ios=2456/2560, merge=0/0, ticks=436/366, in_queue=802, util=87.78% 00:16:54.494 
nvme0n3: ios=1431/1536, merge=0/0, ticks=413/360, in_queue=773, util=88.21% 00:16:54.494 nvme0n4: ios=1536/1744, merge=0/0, ticks=415/385, in_queue=800, util=89.68% 00:16:54.494 10:17:13 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:54.494 [global] 00:16:54.494 thread=1 00:16:54.494 invalidate=1 00:16:54.494 rw=write 00:16:54.494 time_based=1 00:16:54.494 runtime=1 00:16:54.494 ioengine=libaio 00:16:54.494 direct=1 00:16:54.494 bs=4096 00:16:54.494 iodepth=128 00:16:54.494 norandommap=0 00:16:54.494 numjobs=1 00:16:54.494 00:16:54.494 verify_dump=1 00:16:54.494 verify_backlog=512 00:16:54.494 verify_state_save=0 00:16:54.494 do_verify=1 00:16:54.494 verify=crc32c-intel 00:16:54.494 [job0] 00:16:54.494 filename=/dev/nvme0n1 00:16:54.494 [job1] 00:16:54.494 filename=/dev/nvme0n2 00:16:54.494 [job2] 00:16:54.494 filename=/dev/nvme0n3 00:16:54.494 [job3] 00:16:54.494 filename=/dev/nvme0n4 00:16:54.494 Could not set queue depth (nvme0n1) 00:16:54.494 Could not set queue depth (nvme0n2) 00:16:54.494 Could not set queue depth (nvme0n3) 00:16:54.494 Could not set queue depth (nvme0n4) 00:16:54.494 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.494 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.494 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.494 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:54.494 fio-3.35 00:16:54.494 Starting 4 threads 00:16:55.871 00:16:55.871 job0: (groupid=0, jobs=1): err= 0: pid=86980: Tue Nov 19 10:17:15 2024 00:16:55.871 read: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1002msec) 00:16:55.871 slat (usec): min=5, max=5068, avg=79.09, stdev=426.38 00:16:55.871 clat (usec): min=822, max=15459, avg=10318.59, stdev=1358.31 00:16:55.871 lat (usec): min=2892, max=15616, avg=10397.68, stdev=1385.32 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[ 5735], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[ 9896], 00:16:55.871 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:16:55.871 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[12518], 00:16:55.871 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15139], 99.95th=[15401], 00:16:55.871 | 99.99th=[15401] 00:16:55.871 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:16:55.871 slat (usec): min=9, max=4649, avg=79.32, stdev=388.39 00:16:55.871 clat (usec): min=6008, max=15664, avg=10706.90, stdev=1326.03 00:16:55.871 lat (usec): min=6029, max=15682, avg=10786.21, stdev=1302.90 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 8979], 20.00th=[10290], 00:16:55.871 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:16:55.871 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:16:55.871 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15664], 99.95th=[15664], 00:16:55.871 | 99.99th=[15664] 00:16:55.871 bw ( KiB/s): min=24576, max=24576, per=35.80%, avg=24576.00, stdev= 0.00, samples=2 00:16:55.871 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:16:55.871 lat (usec) : 1000=0.01% 00:16:55.871 lat (msec) : 4=0.26%, 10=20.65%, 20=79.09% 00:16:55.871 cpu : usr=5.39%, sys=15.58%, ctx=581, majf=0, minf=10 00:16:55.871 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:55.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.871 issued rwts: total=5902,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.871 job1: (groupid=0, jobs=1): err= 0: pid=86981: Tue Nov 19 10:17:15 2024 00:16:55.871 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:16:55.871 slat (usec): min=4, max=2767, avg=80.22, stdev=344.26 00:16:55.871 clat (usec): min=8333, max=13932, avg=10923.71, stdev=855.74 00:16:55.871 lat (usec): min=8530, max=15052, avg=11003.93, stdev=803.74 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[10552], 00:16:55.871 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:16:55.871 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:16:55.871 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13960], 99.95th=[13960], 00:16:55.871 | 99.99th=[13960] 00:16:55.871 write: IOPS=5924, BW=23.1MiB/s (24.3MB/s)(23.2MiB/1002msec); 0 zone resets 00:16:55.871 slat (usec): min=9, max=2835, avg=84.79, stdev=333.67 00:16:55.871 clat (usec): min=1935, max=13438, avg=10971.48, stdev=1302.53 00:16:55.871 lat (usec): min=1951, max=13504, avg=11056.27, stdev=1294.54 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[ 5538], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:55.871 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:16:55.871 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12518], 00:16:55.871 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:16:55.871 | 99.99th=[13435] 00:16:55.871 bw ( KiB/s): min=21896, max=24625, per=33.88%, avg=23260.50, stdev=1929.69, samples=2 00:16:55.871 iops : min= 5474, max= 6156, avg=5815.00, stdev=482.25, samples=2 00:16:55.871 lat (msec) : 2=0.03%, 4=0.28%, 10=19.03%, 20=80.66% 00:16:55.871 cpu : usr=5.99%, sys=14.69%, ctx=888, majf=0, minf=5 00:16:55.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:55.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.871 issued rwts: total=5632,5936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.871 job2: (groupid=0, jobs=1): err= 0: pid=86982: Tue Nov 19 10:17:15 2024 00:16:55.871 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:16:55.871 slat (usec): min=6, max=11938, avg=179.47, stdev=1050.65 00:16:55.871 clat (usec): min=12175, max=37519, avg=21815.05, stdev=4120.41 00:16:55.871 lat (usec): min=12196, max=37542, avg=21994.52, stdev=4229.35 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[14746], 5.00th=[17433], 10.00th=[17695], 20.00th=[17957], 00:16:55.871 | 30.00th=[18482], 40.00th=[19006], 50.00th=[21365], 60.00th=[23200], 00:16:55.871 | 70.00th=[24773], 80.00th=[25560], 90.00th=[26084], 95.00th=[28967], 00:16:55.871 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487], 00:16:55.871 | 99.99th=[37487] 00:16:55.871 write: IOPS=2647, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1006msec); 0 zone resets 00:16:55.871 slat (usec): min=14, max=10652, avg=195.50, stdev=791.74 00:16:55.871 clat (usec): min=4433, max=41447, 
avg=26664.32, stdev=4835.25 00:16:55.871 lat (usec): min=7902, max=41518, avg=26859.83, stdev=4860.73 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[12911], 5.00th=[21365], 10.00th=[21890], 20.00th=[23462], 00:16:55.871 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25560], 60.00th=[26346], 00:16:55.871 | 70.00th=[28443], 80.00th=[31327], 90.00th=[33817], 95.00th=[34866], 00:16:55.871 | 99.00th=[38011], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:16:55.871 | 99.99th=[41681] 00:16:55.871 bw ( KiB/s): min= 8712, max=11791, per=14.93%, avg=10251.50, stdev=2177.18, samples=2 00:16:55.871 iops : min= 2178, max= 2947, avg=2562.50, stdev=543.77, samples=2 00:16:55.871 lat (msec) : 10=0.33%, 20=23.28%, 50=76.39% 00:16:55.871 cpu : usr=2.19%, sys=8.06%, ctx=349, majf=0, minf=5 00:16:55.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:55.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.871 issued rwts: total=2560,2663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.871 job3: (groupid=0, jobs=1): err= 0: pid=86983: Tue Nov 19 10:17:15 2024 00:16:55.871 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:16:55.871 slat (usec): min=5, max=17885, avg=206.58, stdev=1061.24 00:16:55.871 clat (usec): min=17017, max=43210, avg=26966.77, stdev=4176.45 00:16:55.871 lat (usec): min=21563, max=47631, avg=27173.35, stdev=4107.41 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[19268], 5.00th=[22152], 10.00th=[22676], 20.00th=[23987], 00:16:55.871 | 30.00th=[24511], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:16:55.871 | 70.00th=[27919], 80.00th=[30278], 90.00th=[32375], 95.00th=[35914], 00:16:55.871 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:55.871 | 99.99th=[43254] 00:16:55.871 write: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(9.85MiB/1003msec); 0 zone resets 00:16:55.871 slat (usec): min=12, max=12936, avg=221.12, stdev=1179.99 00:16:55.871 clat (usec): min=1374, max=50674, avg=27953.89, stdev=10380.33 00:16:55.871 lat (usec): min=4948, max=50754, avg=28175.01, stdev=10383.78 00:16:55.871 clat percentiles (usec): 00:16:55.871 | 1.00th=[ 5800], 5.00th=[16188], 10.00th=[17957], 20.00th=[19006], 00:16:55.871 | 30.00th=[21365], 40.00th=[22938], 50.00th=[24511], 60.00th=[25560], 00:16:55.871 | 70.00th=[33424], 80.00th=[37487], 90.00th=[44827], 95.00th=[46924], 00:16:55.871 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:16:55.871 | 99.99th=[50594] 00:16:55.871 bw ( KiB/s): min= 8208, max=10960, per=13.96%, avg=9584.00, stdev=1945.96, samples=2 00:16:55.871 iops : min= 2052, max= 2740, avg=2396.00, stdev=486.49, samples=2 00:16:55.871 lat (msec) : 2=0.02%, 10=0.74%, 20=13.92%, 50=84.64%, 100=0.68% 00:16:55.871 cpu : usr=2.79%, sys=7.29%, ctx=173, majf=0, minf=17 00:16:55.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:55.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:55.871 issued rwts: total=2048,2522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:55.871 00:16:55.871 Run status group 0 (all jobs): 00:16:55.872 READ: bw=62.7MiB/s (65.7MB/s), 8167KiB/s-23.0MiB/s 
(8364kB/s-24.1MB/s), io=63.1MiB (66.1MB), run=1002-1006msec 00:16:55.872 WRITE: bw=67.0MiB/s (70.3MB/s), 9.82MiB/s-24.0MiB/s (10.3MB/s-25.1MB/s), io=67.4MiB (70.7MB), run=1002-1006msec 00:16:55.872 00:16:55.872 Disk stats (read/write): 00:16:55.872 nvme0n1: ios=5170/5204, merge=0/0, ticks=24734/24080, in_queue=48814, util=89.08% 00:16:55.872 nvme0n2: ios=4845/5120, merge=0/0, ticks=12152/11808, in_queue=23960, util=88.17% 00:16:55.872 nvme0n3: ios=2048/2391, merge=0/0, ticks=21340/30564, in_queue=51904, util=88.97% 00:16:55.872 nvme0n4: ios=1792/2048, merge=0/0, ticks=11899/13826, in_queue=25725, util=89.72% 00:16:55.872 10:17:15 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:55.872 [global] 00:16:55.872 thread=1 00:16:55.872 invalidate=1 00:16:55.872 rw=randwrite 00:16:55.872 time_based=1 00:16:55.872 runtime=1 00:16:55.872 ioengine=libaio 00:16:55.872 direct=1 00:16:55.872 bs=4096 00:16:55.872 iodepth=128 00:16:55.872 norandommap=0 00:16:55.872 numjobs=1 00:16:55.872 00:16:55.872 verify_dump=1 00:16:55.872 verify_backlog=512 00:16:55.872 verify_state_save=0 00:16:55.872 do_verify=1 00:16:55.872 verify=crc32c-intel 00:16:55.872 [job0] 00:16:55.872 filename=/dev/nvme0n1 00:16:55.872 [job1] 00:16:55.872 filename=/dev/nvme0n2 00:16:55.872 [job2] 00:16:55.872 filename=/dev/nvme0n3 00:16:55.872 [job3] 00:16:55.872 filename=/dev/nvme0n4 00:16:55.872 Could not set queue depth (nvme0n1) 00:16:55.872 Could not set queue depth (nvme0n2) 00:16:55.872 Could not set queue depth (nvme0n3) 00:16:55.872 Could not set queue depth (nvme0n4) 00:16:55.872 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.872 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.872 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.872 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:55.872 fio-3.35 00:16:55.872 Starting 4 threads 00:16:57.248 00:16:57.248 job0: (groupid=0, jobs=1): err= 0: pid=87042: Tue Nov 19 10:17:16 2024 00:16:57.248 read: IOPS=6008, BW=23.5MiB/s (24.6MB/s)(23.5MiB/1003msec) 00:16:57.248 slat (usec): min=2, max=4598, avg=78.00, stdev=396.84 00:16:57.248 clat (usec): min=2184, max=14868, avg=10358.74, stdev=1375.32 00:16:57.248 lat (usec): min=2194, max=15225, avg=10436.74, stdev=1392.46 00:16:57.248 clat percentiles (usec): 00:16:57.248 | 1.00th=[ 6456], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:16:57.248 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:16:57.248 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:16:57.248 | 99.00th=[13960], 99.50th=[14353], 99.90th=[14746], 99.95th=[14746], 00:16:57.248 | 99.99th=[14877] 00:16:57.248 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:16:57.248 slat (usec): min=10, max=4418, avg=78.75, stdev=398.63 00:16:57.248 clat (usec): min=6064, max=15277, avg=10484.72, stdev=1203.24 00:16:57.248 lat (usec): min=6110, max=15292, avg=10563.48, stdev=1184.80 00:16:57.248 clat percentiles (usec): 00:16:57.248 | 1.00th=[ 6718], 5.00th=[ 7439], 10.00th=[ 9241], 20.00th=[10028], 00:16:57.248 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:16:57.248 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 
00:16:57.248 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15270], 99.95th=[15270], 00:16:57.248 | 99.99th=[15270] 00:16:57.248 bw ( KiB/s): min=24576, max=24576, per=37.89%, avg=24576.00, stdev= 0.00, samples=2 00:16:57.248 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:16:57.248 lat (msec) : 4=0.07%, 10=25.85%, 20=74.08% 00:16:57.248 cpu : usr=4.89%, sys=15.77%, ctx=556, majf=0, minf=17 00:16:57.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:57.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.248 issued rwts: total=6027,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.248 job1: (groupid=0, jobs=1): err= 0: pid=87043: Tue Nov 19 10:17:16 2024 00:16:57.248 read: IOPS=2021, BW=8086KiB/s (8280kB/s)(8312KiB/1028msec) 00:16:57.248 slat (usec): min=2, max=20905, avg=187.87, stdev=1264.24 00:16:57.248 clat (usec): min=5738, max=45399, avg=22484.63, stdev=7454.63 00:16:57.248 lat (usec): min=5748, max=45442, avg=22672.50, stdev=7516.84 00:16:57.248 clat percentiles (usec): 00:16:57.248 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[12256], 20.00th=[13960], 00:16:57.248 | 30.00th=[20055], 40.00th=[21890], 50.00th=[22414], 60.00th=[22938], 00:16:57.248 | 70.00th=[24249], 80.00th=[28443], 90.00th=[33162], 95.00th=[35390], 00:16:57.248 | 99.00th=[42206], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:16:57.248 | 99.99th=[45351] 00:16:57.248 write: IOPS=2490, BW=9961KiB/s (10.2MB/s)(10.0MiB/1028msec); 0 zone resets 00:16:57.248 slat (usec): min=4, max=17938, avg=231.40, stdev=1139.88 00:16:57.248 clat (msec): min=4, max=143, avg=32.61, stdev=22.76 00:16:57.248 lat (msec): min=4, max=143, avg=32.85, stdev=22.88 00:16:57.248 clat percentiles (msec): 00:16:57.248 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 18], 20.00th=[ 23], 00:16:57.248 | 30.00th=[ 24], 40.00th=[ 26], 50.00th=[ 27], 60.00th=[ 29], 00:16:57.248 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 57], 95.00th=[ 79], 00:16:57.248 | 99.00th=[ 134], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:16:57.248 | 99.99th=[ 144] 00:16:57.248 bw ( KiB/s): min= 7432, max=12272, per=15.19%, avg=9852.00, stdev=3422.40, samples=2 00:16:57.248 iops : min= 1858, max= 3068, avg=2463.00, stdev=855.60, samples=2 00:16:57.248 lat (msec) : 10=5.13%, 20=14.27%, 50=74.19%, 100=4.36%, 250=2.05% 00:16:57.248 cpu : usr=2.14%, sys=5.06%, ctx=322, majf=0, minf=12 00:16:57.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:57.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.248 issued rwts: total=2078,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.248 job2: (groupid=0, jobs=1): err= 0: pid=87044: Tue Nov 19 10:17:16 2024 00:16:57.248 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:16:57.248 slat (usec): min=3, max=12807, avg=107.98, stdev=721.84 00:16:57.248 clat (usec): min=4515, max=27529, avg=14023.99, stdev=3599.13 00:16:57.248 lat (usec): min=4529, max=27568, avg=14131.97, stdev=3643.44 00:16:57.248 clat percentiles (usec): 00:16:57.248 | 1.00th=[ 6783], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11338], 00:16:57.248 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13435], 60.00th=[14222], 
00:16:57.248 | 70.00th=[15008], 80.00th=[16581], 90.00th=[18744], 95.00th=[21627], 00:16:57.248 | 99.00th=[25297], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:16:57.248 | 99.99th=[27657] 00:16:57.248 write: IOPS=4957, BW=19.4MiB/s (20.3MB/s)(19.6MiB/1013msec); 0 zone resets 00:16:57.248 slat (usec): min=4, max=12978, avg=92.89, stdev=577.32 00:16:57.248 clat (usec): min=3665, max=27889, avg=12692.20, stdev=3138.67 00:16:57.248 lat (usec): min=3694, max=27977, avg=12785.09, stdev=3196.60 00:16:57.248 clat percentiles (usec): 00:16:57.249 | 1.00th=[ 4752], 5.00th=[ 6521], 10.00th=[ 8455], 20.00th=[10814], 00:16:57.249 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12911], 60.00th=[13173], 00:16:57.249 | 70.00th=[13960], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:16:57.249 | 99.00th=[22676], 99.50th=[25822], 99.90th=[27395], 99.95th=[27395], 00:16:57.249 | 99.99th=[27919] 00:16:57.249 bw ( KiB/s): min=17864, max=21296, per=30.19%, avg=19580.00, stdev=2426.79, samples=2 00:16:57.249 iops : min= 4466, max= 5324, avg=4895.00, stdev=606.70, samples=2 00:16:57.249 lat (msec) : 4=0.06%, 10=11.01%, 20=84.11%, 50=4.82% 00:16:57.249 cpu : usr=3.56%, sys=12.06%, ctx=538, majf=0, minf=10 00:16:57.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:57.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.249 issued rwts: total=4608,5022,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.249 job3: (groupid=0, jobs=1): err= 0: pid=87045: Tue Nov 19 10:17:16 2024 00:16:57.249 read: IOPS=2485, BW=9942KiB/s (10.2MB/s)(10.0MiB/1030msec) 00:16:57.249 slat (usec): min=3, max=24378, avg=190.98, stdev=1357.70 00:16:57.249 clat (usec): min=6131, max=68919, avg=22011.81, stdev=10812.14 00:16:57.249 lat (usec): min=6160, max=68937, avg=22202.79, stdev=10925.06 00:16:57.249 clat percentiles (usec): 00:16:57.249 | 1.00th=[ 7177], 5.00th=[10814], 10.00th=[11994], 20.00th=[13304], 00:16:57.249 | 30.00th=[16188], 40.00th=[16581], 50.00th=[17695], 60.00th=[21890], 00:16:57.249 | 70.00th=[23725], 80.00th=[27657], 90.00th=[35914], 95.00th=[47449], 00:16:57.249 | 99.00th=[60031], 99.50th=[61080], 99.90th=[68682], 99.95th=[68682], 00:16:57.249 | 99.99th=[68682] 00:16:57.249 write: IOPS=2887, BW=11.3MiB/s (11.8MB/s)(11.6MiB/1030msec); 0 zone resets 00:16:57.249 slat (usec): min=4, max=29873, avg=164.61, stdev=1013.07 00:16:57.249 clat (usec): min=4002, max=69165, avg=24923.18, stdev=11524.25 00:16:57.249 lat (usec): min=4027, max=69193, avg=25087.78, stdev=11593.98 00:16:57.249 clat percentiles (usec): 00:16:57.249 | 1.00th=[ 5145], 5.00th=[ 9896], 10.00th=[13173], 20.00th=[14746], 00:16:57.249 | 30.00th=[20055], 40.00th=[22152], 50.00th=[24511], 60.00th=[25822], 00:16:57.249 | 70.00th=[28705], 80.00th=[31065], 90.00th=[33424], 95.00th=[46924], 00:16:57.249 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:16:57.249 | 99.99th=[68682] 00:16:57.249 bw ( KiB/s): min=10632, max=12144, per=17.56%, avg=11388.00, stdev=1069.15, samples=2 00:16:57.249 iops : min= 2658, max= 3036, avg=2847.00, stdev=267.29, samples=2 00:16:57.249 lat (msec) : 10=3.32%, 20=37.31%, 50=55.24%, 100=4.12% 00:16:57.249 cpu : usr=2.24%, sys=7.29%, ctx=321, majf=0, minf=13 00:16:57.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:57.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:57.249 issued rwts: total=2560,2974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:57.249 00:16:57.249 Run status group 0 (all jobs): 00:16:57.249 READ: bw=57.9MiB/s (60.7MB/s), 8086KiB/s-23.5MiB/s (8280kB/s-24.6MB/s), io=59.7MiB (62.6MB), run=1003-1030msec 00:16:57.249 WRITE: bw=63.3MiB/s (66.4MB/s), 9961KiB/s-23.9MiB/s (10.2MB/s-25.1MB/s), io=65.2MiB (68.4MB), run=1003-1030msec 00:16:57.249 00:16:57.249 Disk stats (read/write): 00:16:57.249 nvme0n1: ios=5170/5247, merge=0/0, ticks=24693/22627, in_queue=47320, util=87.86% 00:16:57.249 nvme0n2: ios=1825/2048, merge=0/0, ticks=41046/64040, in_queue=105086, util=87.61% 00:16:57.249 nvme0n3: ios=4096/4159, merge=0/0, ticks=53174/48287, in_queue=101461, util=88.87% 00:16:57.249 nvme0n4: ios=2048/2455, merge=0/0, ticks=45392/54768, in_queue=100160, util=89.61% 00:16:57.249 10:17:16 -- target/fio.sh@55 -- # sync 00:16:57.249 10:17:16 -- target/fio.sh@59 -- # fio_pid=87058 00:16:57.249 10:17:16 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:57.249 10:17:16 -- target/fio.sh@61 -- # sleep 3 00:16:57.249 [global] 00:16:57.249 thread=1 00:16:57.249 invalidate=1 00:16:57.249 rw=read 00:16:57.249 time_based=1 00:16:57.249 runtime=10 00:16:57.249 ioengine=libaio 00:16:57.249 direct=1 00:16:57.249 bs=4096 00:16:57.249 iodepth=1 00:16:57.249 norandommap=1 00:16:57.249 numjobs=1 00:16:57.249 00:16:57.249 [job0] 00:16:57.249 filename=/dev/nvme0n1 00:16:57.249 [job1] 00:16:57.249 filename=/dev/nvme0n2 00:16:57.249 [job2] 00:16:57.249 filename=/dev/nvme0n3 00:16:57.249 [job3] 00:16:57.249 filename=/dev/nvme0n4 00:16:57.249 Could not set queue depth (nvme0n1) 00:16:57.249 Could not set queue depth (nvme0n2) 00:16:57.249 Could not set queue depth (nvme0n3) 00:16:57.249 Could not set queue depth (nvme0n4) 00:16:57.249 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.249 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.249 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.249 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:57.249 fio-3.35 00:16:57.249 Starting 4 threads 00:17:00.531 10:17:19 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:00.531 fio: pid=87101, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.531 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63696896, buflen=4096 00:17:00.531 10:17:19 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:00.789 fio: pid=87100, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:00.789 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=60600320, buflen=4096 00:17:00.789 10:17:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:00.789 10:17:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:01.047 fio: pid=87098, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.047 fio: 
io_u error on file /dev/nvme0n1: Operation not supported: read offset=49737728, buflen=4096 00:17:01.047 10:17:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.047 10:17:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:01.306 fio: pid=87099, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:17:01.306 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=65343488, buflen=4096 00:17:01.306 10:17:20 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.306 10:17:20 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:01.306 00:17:01.306 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87098: Tue Nov 19 10:17:20 2024 00:17:01.306 read: IOPS=3409, BW=13.3MiB/s (14.0MB/s)(47.4MiB/3562msec) 00:17:01.306 slat (usec): min=8, max=15863, avg=24.04, stdev=228.83 00:17:01.306 clat (usec): min=96, max=57978, avg=267.22, stdev=532.68 00:17:01.306 lat (usec): min=148, max=57991, avg=291.26, stdev=579.94 00:17:01.306 clat percentiles (usec): 00:17:01.306 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 174], 20.00th=[ 245], 00:17:01.306 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:17:01.306 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 318], 00:17:01.306 | 99.00th=[ 420], 99.50th=[ 490], 99.90th=[ 1336], 99.95th=[ 2573], 00:17:01.306 | 99.99th=[ 4293] 00:17:01.306 bw ( KiB/s): min=13344, max=13656, per=22.36%, avg=13469.33, stdev=124.00, samples=6 00:17:01.306 iops : min= 3336, max= 3414, avg=3367.33, stdev=31.00, samples=6 00:17:01.306 lat (usec) : 100=0.01%, 250=24.38%, 500=75.14%, 750=0.30%, 1000=0.06% 00:17:01.306 lat (msec) : 2=0.02%, 4=0.06%, 10=0.02%, 100=0.01% 00:17:01.306 cpu : usr=1.46%, sys=5.59%, ctx=12192, majf=0, minf=1 00:17:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 issued rwts: total=12144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.306 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87099: Tue Nov 19 10:17:20 2024 00:17:01.306 read: IOPS=4111, BW=16.1MiB/s (16.8MB/s)(62.3MiB/3880msec) 00:17:01.306 slat (usec): min=12, max=11547, avg=21.06, stdev=182.95 00:17:01.306 clat (usec): min=126, max=4683, avg=220.59, stdev=97.81 00:17:01.306 lat (usec): min=140, max=11803, avg=241.64, stdev=207.84 00:17:01.306 clat percentiles (usec): 00:17:01.306 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:17:01.306 | 30.00th=[ 159], 40.00th=[ 172], 50.00th=[ 243], 60.00th=[ 262], 00:17:01.306 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 00:17:01.306 | 99.00th=[ 396], 99.50th=[ 441], 99.90th=[ 898], 99.95th=[ 1795], 00:17:01.306 | 99.99th=[ 4490] 00:17:01.306 bw ( KiB/s): min=13152, max=22256, per=26.63%, avg=16043.71, stdev=3751.99, samples=7 00:17:01.306 iops : min= 3288, max= 5564, avg=4010.86, stdev=937.90, samples=7 00:17:01.306 lat (usec) : 250=51.57%, 500=48.03%, 750=0.25%, 1000=0.08% 00:17:01.306 lat (msec) : 2=0.04%, 4=0.01%, 10=0.02% 00:17:01.306 cpu : usr=1.03%, 
sys=6.19%, ctx=15967, majf=0, minf=1 00:17:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 issued rwts: total=15954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.306 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87100: Tue Nov 19 10:17:20 2024 00:17:01.306 read: IOPS=4541, BW=17.7MiB/s (18.6MB/s)(57.8MiB/3258msec) 00:17:01.306 slat (usec): min=7, max=15748, avg=17.28, stdev=144.22 00:17:01.306 clat (usec): min=59, max=2797, avg=201.17, stdev=62.35 00:17:01.306 lat (usec): min=164, max=15989, avg=218.45, stdev=157.53 00:17:01.306 clat percentiles (usec): 00:17:01.306 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:17:01.306 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:17:01.306 | 70.00th=[ 198], 80.00th=[ 221], 90.00th=[ 269], 95.00th=[ 285], 00:17:01.306 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 586], 99.95th=[ 652], 00:17:01.306 | 99.99th=[ 2737] 00:17:01.306 bw ( KiB/s): min=13432, max=20536, per=30.91%, avg=18622.67, stdev=2668.35, samples=6 00:17:01.306 iops : min= 3358, max= 5134, avg=4655.67, stdev=667.09, samples=6 00:17:01.306 lat (usec) : 100=0.01%, 250=84.75%, 500=15.09%, 750=0.11% 00:17:01.306 lat (msec) : 2=0.01%, 4=0.03% 00:17:01.306 cpu : usr=1.60%, sys=6.02%, ctx=14822, majf=0, minf=2 00:17:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 issued rwts: total=14796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.306 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:01.306 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=87101: Tue Nov 19 10:17:20 2024 00:17:01.306 read: IOPS=5178, BW=20.2MiB/s (21.2MB/s)(60.7MiB/3003msec) 00:17:01.306 slat (usec): min=13, max=134, avg=15.28, stdev= 2.46 00:17:01.306 clat (usec): min=135, max=1836, avg=176.38, stdev=29.53 00:17:01.306 lat (usec): min=151, max=1851, avg=191.66, stdev=29.77 00:17:01.306 clat percentiles (usec): 00:17:01.306 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:17:01.306 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:17:01.306 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 221], 00:17:01.306 | 99.00th=[ 260], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 392], 00:17:01.306 | 99.99th=[ 1516] 00:17:01.306 bw ( KiB/s): min=18656, max=21184, per=33.93%, avg=20440.00, stdev=1099.11, samples=5 00:17:01.306 iops : min= 4664, max= 5296, avg=5110.00, stdev=274.78, samples=5 00:17:01.306 lat (usec) : 250=98.54%, 500=1.43%, 1000=0.01% 00:17:01.306 lat (msec) : 2=0.01% 00:17:01.306 cpu : usr=1.40%, sys=6.36%, ctx=15553, majf=0, minf=1 00:17:01.306 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:01.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:01.306 issued rwts: total=15552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:01.306 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:17:01.306 00:17:01.306 Run status group 0 (all jobs): 00:17:01.306 READ: bw=58.8MiB/s (61.7MB/s), 13.3MiB/s-20.2MiB/s (14.0MB/s-21.2MB/s), io=228MiB (239MB), run=3003-3880msec 00:17:01.306 00:17:01.306 Disk stats (read/write): 00:17:01.307 nvme0n1: ios=11247/0, merge=0/0, ticks=3074/0, in_queue=3074, util=94.94% 00:17:01.307 nvme0n2: ios=15916/0, merge=0/0, ticks=3611/0, in_queue=3611, util=95.71% 00:17:01.307 nvme0n3: ios=14322/0, merge=0/0, ticks=2851/0, in_queue=2851, util=96.12% 00:17:01.307 nvme0n4: ios=14820/0, merge=0/0, ticks=2689/0, in_queue=2689, util=96.73% 00:17:01.565 10:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.565 10:17:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:01.843 10:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:01.843 10:17:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:02.102 10:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.102 10:17:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:02.668 10:17:21 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:02.668 10:17:21 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:02.926 10:17:22 -- target/fio.sh@69 -- # fio_status=0 00:17:02.926 10:17:22 -- target/fio.sh@70 -- # wait 87058 00:17:02.926 10:17:22 -- target/fio.sh@70 -- # fio_status=4 00:17:02.926 10:17:22 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:02.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.926 10:17:22 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:02.926 10:17:22 -- common/autotest_common.sh@1208 -- # local i=0 00:17:02.926 10:17:22 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.926 10:17:22 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:02.926 10:17:22 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:02.926 10:17:22 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:02.926 nvmf hotplug test: fio failed as expected 00:17:02.926 10:17:22 -- common/autotest_common.sh@1220 -- # return 0 00:17:02.926 10:17:22 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:02.926 10:17:22 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:02.926 10:17:22 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:03.186 10:17:22 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:03.186 10:17:22 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:03.186 10:17:22 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:03.186 10:17:22 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:03.186 10:17:22 -- target/fio.sh@91 -- # nvmftestfini 00:17:03.186 10:17:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:03.186 10:17:22 -- nvmf/common.sh@116 -- # sync 00:17:03.186 10:17:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:03.186 10:17:22 -- nvmf/common.sh@119 -- # set +e 00:17:03.186 10:17:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:03.186 10:17:22 -- 
nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:03.186 rmmod nvme_tcp 00:17:03.186 rmmod nvme_fabrics 00:17:03.186 rmmod nvme_keyring 00:17:03.186 10:17:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:03.186 10:17:22 -- nvmf/common.sh@123 -- # set -e 00:17:03.186 10:17:22 -- nvmf/common.sh@124 -- # return 0 00:17:03.186 10:17:22 -- nvmf/common.sh@477 -- # '[' -n 86557 ']' 00:17:03.186 10:17:22 -- nvmf/common.sh@478 -- # killprocess 86557 00:17:03.186 10:17:22 -- common/autotest_common.sh@936 -- # '[' -z 86557 ']' 00:17:03.186 10:17:22 -- common/autotest_common.sh@940 -- # kill -0 86557 00:17:03.186 10:17:22 -- common/autotest_common.sh@941 -- # uname 00:17:03.186 10:17:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.186 10:17:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86557 00:17:03.186 10:17:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.186 10:17:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.186 10:17:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86557' 00:17:03.186 killing process with pid 86557 00:17:03.186 10:17:22 -- common/autotest_common.sh@955 -- # kill 86557 00:17:03.186 10:17:22 -- common/autotest_common.sh@960 -- # wait 86557 00:17:03.445 10:17:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:03.445 10:17:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:03.445 10:17:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:03.445 10:17:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.445 10:17:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:03.445 10:17:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.445 10:17:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.445 10:17:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.445 10:17:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:03.445 00:17:03.445 real 0m20.643s 00:17:03.445 user 1m19.805s 00:17:03.445 sys 0m9.495s 00:17:03.445 10:17:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:03.445 10:17:22 -- common/autotest_common.sh@10 -- # set +x 00:17:03.445 ************************************ 00:17:03.445 END TEST nvmf_fio_target 00:17:03.445 ************************************ 00:17:03.445 10:17:22 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.445 10:17:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.445 10:17:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.445 10:17:22 -- common/autotest_common.sh@10 -- # set +x 00:17:03.445 ************************************ 00:17:03.445 START TEST nvmf_bdevio 00:17:03.445 ************************************ 00:17:03.445 10:17:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:03.445 * Looking for test storage... 
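The hotplug phase traced in the fio_target section above follows a simple pattern: start a time-based fio read workload over the connected NVMe-oF block devices, delete the backing bdevs on the target while I/O is in flight, and expect fio to fail. A condensed sketch of that flow, with the wrapper flags, bdev names, and NQN copied from the trace (the background/wait structure is an assumption about how fio.sh sequences it, not the literal script):

SPDK=/home/vagrant/spdk_repo/spdk

# 10-second time-based 4k read job against the connected nvme block devices
"$SPDK"/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3   # let the job ramp up before pulling bdevs out from under it

# delete the bdevs backing the subsystem namespaces while I/O is running
"$SPDK"/scripts/rpc.py bdev_raid_delete concat0
"$SPDK"/scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$SPDK"/scripts/rpc.py bdev_malloc_delete "$m"
done

# fio should exit non-zero once its files start returning "Operation not supported"
if wait "$fio_pid"; then
    echo "unexpected: fio survived bdev hotplug"
else
    echo "nvmf hotplug test: fio failed as expected"
fi

nvme disconnect -n nqn.2016-06.io.spdk:cnode1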
00:17:03.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:03.445 10:17:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:03.445 10:17:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:03.445 10:17:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:03.704 10:17:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:03.704 10:17:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:03.704 10:17:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:03.704 10:17:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:03.704 10:17:23 -- scripts/common.sh@335 -- # IFS=.-: 00:17:03.704 10:17:23 -- scripts/common.sh@335 -- # read -ra ver1 00:17:03.704 10:17:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.704 10:17:23 -- scripts/common.sh@336 -- # read -ra ver2 00:17:03.704 10:17:23 -- scripts/common.sh@337 -- # local 'op=<' 00:17:03.704 10:17:23 -- scripts/common.sh@339 -- # ver1_l=2 00:17:03.704 10:17:23 -- scripts/common.sh@340 -- # ver2_l=1 00:17:03.704 10:17:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:03.704 10:17:23 -- scripts/common.sh@343 -- # case "$op" in 00:17:03.704 10:17:23 -- scripts/common.sh@344 -- # : 1 00:17:03.704 10:17:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:03.704 10:17:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.704 10:17:23 -- scripts/common.sh@364 -- # decimal 1 00:17:03.704 10:17:23 -- scripts/common.sh@352 -- # local d=1 00:17:03.704 10:17:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.704 10:17:23 -- scripts/common.sh@354 -- # echo 1 00:17:03.704 10:17:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:03.704 10:17:23 -- scripts/common.sh@365 -- # decimal 2 00:17:03.704 10:17:23 -- scripts/common.sh@352 -- # local d=2 00:17:03.704 10:17:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.704 10:17:23 -- scripts/common.sh@354 -- # echo 2 00:17:03.704 10:17:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:03.704 10:17:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:03.704 10:17:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:03.704 10:17:23 -- scripts/common.sh@367 -- # return 0 00:17:03.704 10:17:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.704 10:17:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:03.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.704 --rc genhtml_branch_coverage=1 00:17:03.704 --rc genhtml_function_coverage=1 00:17:03.704 --rc genhtml_legend=1 00:17:03.704 --rc geninfo_all_blocks=1 00:17:03.704 --rc geninfo_unexecuted_blocks=1 00:17:03.704 00:17:03.704 ' 00:17:03.704 10:17:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:03.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.704 --rc genhtml_branch_coverage=1 00:17:03.704 --rc genhtml_function_coverage=1 00:17:03.704 --rc genhtml_legend=1 00:17:03.704 --rc geninfo_all_blocks=1 00:17:03.704 --rc geninfo_unexecuted_blocks=1 00:17:03.704 00:17:03.704 ' 00:17:03.704 10:17:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:03.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.704 --rc genhtml_branch_coverage=1 00:17:03.704 --rc genhtml_function_coverage=1 00:17:03.704 --rc genhtml_legend=1 00:17:03.704 --rc geninfo_all_blocks=1 00:17:03.704 --rc geninfo_unexecuted_blocks=1 00:17:03.704 00:17:03.704 ' 00:17:03.704 
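The lcov version check traced here boils down to a small dotted-version comparator: split both version strings on '.', '-' or ':' and compare them component by component. A sketch of the pattern the trace is exercising, assuming purely numeric components (this is not the literal scripts/common.sh):

lt() {  # return 0 if version $1 is strictly less than version $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov is older than 2.x"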
10:17:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:03.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.704 --rc genhtml_branch_coverage=1 00:17:03.704 --rc genhtml_function_coverage=1 00:17:03.704 --rc genhtml_legend=1 00:17:03.704 --rc geninfo_all_blocks=1 00:17:03.704 --rc geninfo_unexecuted_blocks=1 00:17:03.704 00:17:03.704 ' 00:17:03.704 10:17:23 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:03.704 10:17:23 -- nvmf/common.sh@7 -- # uname -s 00:17:03.704 10:17:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.704 10:17:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.704 10:17:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.704 10:17:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.704 10:17:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.704 10:17:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.705 10:17:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.705 10:17:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.705 10:17:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.705 10:17:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.705 10:17:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:17:03.705 10:17:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:17:03.705 10:17:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.705 10:17:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.705 10:17:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:03.705 10:17:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:03.705 10:17:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.705 10:17:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.705 10:17:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.705 10:17:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.705 10:17:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.705 10:17:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.705 10:17:23 -- paths/export.sh@5 -- # export PATH 00:17:03.705 10:17:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.705 10:17:23 -- nvmf/common.sh@46 -- # : 0 00:17:03.705 10:17:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:03.705 10:17:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:03.705 10:17:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:03.705 10:17:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.705 10:17:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.705 10:17:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:03.705 10:17:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:03.705 10:17:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:03.705 10:17:23 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.705 10:17:23 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.705 10:17:23 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:03.705 10:17:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:03.705 10:17:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.705 10:17:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:03.705 10:17:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:03.705 10:17:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:03.705 10:17:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.705 10:17:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.705 10:17:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.705 10:17:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:03.705 10:17:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:03.705 10:17:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:03.705 10:17:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:03.705 10:17:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:03.705 10:17:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:03.705 10:17:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.705 10:17:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.705 10:17:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:03.705 10:17:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:03.705 10:17:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:03.705 10:17:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:03.705 10:17:23 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:03.705 10:17:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.705 10:17:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:03.705 10:17:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:03.705 10:17:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:03.705 10:17:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:03.705 10:17:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:03.705 10:17:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:03.705 Cannot find device "nvmf_tgt_br" 00:17:03.705 10:17:23 -- nvmf/common.sh@154 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:03.705 Cannot find device "nvmf_tgt_br2" 00:17:03.705 10:17:23 -- nvmf/common.sh@155 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:03.705 10:17:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:03.705 Cannot find device "nvmf_tgt_br" 00:17:03.705 10:17:23 -- nvmf/common.sh@157 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:03.705 Cannot find device "nvmf_tgt_br2" 00:17:03.705 10:17:23 -- nvmf/common.sh@158 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:03.705 10:17:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:03.705 10:17:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:03.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.705 10:17:23 -- nvmf/common.sh@161 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:03.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:03.705 10:17:23 -- nvmf/common.sh@162 -- # true 00:17:03.705 10:17:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:03.705 10:17:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:03.705 10:17:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:03.705 10:17:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:03.705 10:17:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:03.964 10:17:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:03.964 10:17:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:03.964 10:17:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.964 10:17:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:03.964 10:17:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:03.964 10:17:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:03.964 10:17:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:03.964 10:17:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:03.964 10:17:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.964 10:17:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:03.964 10:17:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:03.964 10:17:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:03.964 10:17:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:03.964 10:17:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:03.964 10:17:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:03.964 10:17:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:03.964 10:17:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:03.964 10:17:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:03.964 10:17:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:03.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:17:03.964 00:17:03.964 --- 10.0.0.2 ping statistics --- 00:17:03.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.964 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:03.964 10:17:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:03.964 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:03.964 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:03.964 00:17:03.964 --- 10.0.0.3 ping statistics --- 00:17:03.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.964 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:03.964 10:17:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:03.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:03.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:03.964 00:17:03.964 --- 10.0.0.1 ping statistics --- 00:17:03.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.964 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:03.964 10:17:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.964 10:17:23 -- nvmf/common.sh@421 -- # return 0 00:17:03.964 10:17:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:03.964 10:17:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.964 10:17:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:03.964 10:17:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:03.964 10:17:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.964 10:17:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:03.964 10:17:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:03.964 10:17:23 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.964 10:17:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:03.964 10:17:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.964 10:17:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.964 10:17:23 -- nvmf/common.sh@469 -- # nvmfpid=87436 00:17:03.964 10:17:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:03.964 10:17:23 -- nvmf/common.sh@470 -- # waitforlisten 87436 00:17:03.964 10:17:23 -- common/autotest_common.sh@829 -- # '[' -z 87436 ']' 00:17:03.964 10:17:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.964 10:17:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
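Stripped of the xtrace noise, the network plumbing that nvmf_veth_init performs above is a three-veth, one-bridge topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and a bridge joins the root-side veth ends. A minimal sketch using the same names and addresses as the trace:

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the root-namespace ends together and allow NVMe/TCP traffic
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity-check connectivity in both directions
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

After this the test loads the nvme-tcp kernel module and launches nvmf_tgt inside the namespace, which is the nvmfappstart step visible in the trace.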
00:17:03.964 10:17:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.964 10:17:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.964 10:17:23 -- common/autotest_common.sh@10 -- # set +x 00:17:03.964 [2024-11-19 10:17:23.480376] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:03.964 [2024-11-19 10:17:23.480463] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.223 [2024-11-19 10:17:23.617682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.223 [2024-11-19 10:17:23.653332] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:04.223 [2024-11-19 10:17:23.653481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.223 [2024-11-19 10:17:23.653494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.223 [2024-11-19 10:17:23.653503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.223 [2024-11-19 10:17:23.653647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:04.223 [2024-11-19 10:17:23.653976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:04.223 [2024-11-19 10:17:23.654077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:04.223 [2024-11-19 10:17:23.654081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:05.158 10:17:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.158 10:17:24 -- common/autotest_common.sh@862 -- # return 0 00:17:05.158 10:17:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:05.158 10:17:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.158 10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 10:17:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.158 10:17:24 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:05.158 10:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.158 10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [2024-11-19 10:17:24.497275] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:05.158 10:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.158 10:17:24 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:05.158 10:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.158 10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 Malloc0 00:17:05.158 10:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.158 10:17:24 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:05.158 10:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.158 10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 10:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.158 10:17:24 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:05.158 10:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.158 
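With the target listening on /var/tmp/spdk.sock, the bring-up traced above reduces to a short RPC sequence; rpc_cmd in the trace is assumed to be a thin wrapper around scripts/rpc.py. Sketched out, with the sizes and NQN taken from the trace (the listener add that completes the setup shows up in the next trace lines):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with the options recorded in the trace (-o, in-capsule data 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks to back the namespace
$RPC bdev_malloc_create 64 512 -b Malloc0

# subsystem cnode1: allow any host (-a), serial SPDK00000000000001, Malloc0 as its namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# expose the subsystem over TCP on the target-side address
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420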
10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 10:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.158 10:17:24 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:05.158 10:17:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.158 10:17:24 -- common/autotest_common.sh@10 -- # set +x 00:17:05.158 [2024-11-19 10:17:24.559587] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.158 10:17:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.158 10:17:24 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:05.158 10:17:24 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:05.158 10:17:24 -- nvmf/common.sh@520 -- # config=() 00:17:05.158 10:17:24 -- nvmf/common.sh@520 -- # local subsystem config 00:17:05.158 10:17:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:05.158 10:17:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:05.158 { 00:17:05.158 "params": { 00:17:05.158 "name": "Nvme$subsystem", 00:17:05.158 "trtype": "$TEST_TRANSPORT", 00:17:05.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:05.158 "adrfam": "ipv4", 00:17:05.158 "trsvcid": "$NVMF_PORT", 00:17:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:05.158 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:05.158 "hdgst": ${hdgst:-false}, 00:17:05.158 "ddgst": ${ddgst:-false} 00:17:05.158 }, 00:17:05.158 "method": "bdev_nvme_attach_controller" 00:17:05.158 } 00:17:05.158 EOF 00:17:05.158 )") 00:17:05.158 10:17:24 -- nvmf/common.sh@542 -- # cat 00:17:05.158 10:17:24 -- nvmf/common.sh@544 -- # jq . 00:17:05.158 10:17:24 -- nvmf/common.sh@545 -- # IFS=, 00:17:05.158 10:17:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:05.158 "params": { 00:17:05.158 "name": "Nvme1", 00:17:05.158 "trtype": "tcp", 00:17:05.158 "traddr": "10.0.0.2", 00:17:05.158 "adrfam": "ipv4", 00:17:05.158 "trsvcid": "4420", 00:17:05.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:05.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:05.158 "hdgst": false, 00:17:05.158 "ddgst": false 00:17:05.158 }, 00:17:05.158 "method": "bdev_nvme_attach_controller" 00:17:05.158 }' 00:17:05.158 [2024-11-19 10:17:24.607676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:05.158 [2024-11-19 10:17:24.607764] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87492 ] 00:17:05.416 [2024-11-19 10:17:24.743946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:05.416 [2024-11-19 10:17:24.780939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:05.416 [2024-11-19 10:17:24.781020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:05.417 [2024-11-19 10:17:24.781029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.417 [2024-11-19 10:17:24.911747] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
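The one-line JSON that gen_nvmf_target_json prints above and feeds to bdevio through /dev/fd/62 is easier to read reformatted; the bdev_nvme_attach_controller entry it carries is shown below (the enclosing configuration wrapper is not visible in the trace and is left out here as well):

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

bdevio then attaches to the target as controller Nvme1 and runs its CUnit block-device suite against the resulting Nvme1n1 bdev, which is the output that follows.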
00:17:05.417 [2024-11-19 10:17:24.911810] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:05.417 I/O targets: 00:17:05.417 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:05.417 00:17:05.417 00:17:05.417 CUnit - A unit testing framework for C - Version 2.1-3 00:17:05.417 http://cunit.sourceforge.net/ 00:17:05.417 00:17:05.417 00:17:05.417 Suite: bdevio tests on: Nvme1n1 00:17:05.675 Test: blockdev write read block ...passed 00:17:05.675 Test: blockdev write zeroes read block ...passed 00:17:05.675 Test: blockdev write zeroes read no split ...passed 00:17:05.675 Test: blockdev write zeroes read split ...passed 00:17:05.675 Test: blockdev write zeroes read split partial ...passed 00:17:05.675 Test: blockdev reset ...[2024-11-19 10:17:25.028644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:05.675 [2024-11-19 10:17:25.028799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce8ee0 (9): Bad file descriptor 00:17:05.675 [2024-11-19 10:17:25.041900] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:05.675 passed 00:17:05.675 Test: blockdev write read 8 blocks ...passed 00:17:05.675 Test: blockdev write read size > 128k ...passed 00:17:05.675 Test: blockdev write read invalid size ...passed 00:17:05.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:05.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:05.675 Test: blockdev write read max offset ...passed 00:17:05.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:05.675 Test: blockdev writev readv 8 blocks ...passed 00:17:05.675 Test: blockdev writev readv 30 x 1block ...passed 00:17:05.675 Test: blockdev writev readv block ...passed 00:17:05.675 Test: blockdev writev readv size > 128k ...passed 00:17:05.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:05.675 Test: blockdev comparev and writev ...[2024-11-19 10:17:25.215233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.215312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.215344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.215361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.215695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.215723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.215749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.215763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.216103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.216139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.216163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.216178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.216486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.216511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.675 [2024-11-19 10:17:25.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.675 [2024-11-19 10:17:25.216549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.933 passed 00:17:05.933 Test: blockdev nvme passthru rw ...passed 00:17:05.933 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:17:25.302594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.933 [2024-11-19 10:17:25.302990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.933 [2024-11-19 10:17:25.303350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.933 [2024-11-19 10:17:25.303630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.933 [2024-11-19 10:17:25.303970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.933 [2024-11-19 10:17:25.304165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 spassed 00:17:05.933 Test: blockdev nvme admin passthru ...qhd:002e p:0 m:0 dnr:0 00:17:05.933 [2024-11-19 10:17:25.304462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.933 [2024-11-19 10:17:25.304497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.933 passed 00:17:05.933 Test: blockdev copy ...passed 00:17:05.933 00:17:05.933 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.933 suites 1 1 n/a 0 0 00:17:05.933 tests 23 23 23 0 0 00:17:05.933 asserts 152 152 152 0 n/a 00:17:05.933 00:17:05.933 Elapsed time = 0.894 seconds 00:17:06.191 10:17:25 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:06.191 10:17:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.191 10:17:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 10:17:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.191 10:17:25 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:06.191 10:17:25 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:06.191 10:17:25 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:06.191 10:17:25 -- nvmf/common.sh@116 -- # sync 00:17:06.191 10:17:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:06.191 10:17:25 -- nvmf/common.sh@119 -- # set +e 00:17:06.191 10:17:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:06.191 10:17:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:06.191 rmmod nvme_tcp 00:17:06.191 rmmod nvme_fabrics 00:17:06.191 rmmod nvme_keyring 00:17:06.191 10:17:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:06.191 10:17:25 -- nvmf/common.sh@123 -- # set -e 00:17:06.191 10:17:25 -- nvmf/common.sh@124 -- # return 0 00:17:06.191 10:17:25 -- nvmf/common.sh@477 -- # '[' -n 87436 ']' 00:17:06.191 10:17:25 -- nvmf/common.sh@478 -- # killprocess 87436 00:17:06.191 10:17:25 -- common/autotest_common.sh@936 -- # '[' -z 87436 ']' 00:17:06.191 10:17:25 -- common/autotest_common.sh@940 -- # kill -0 87436 00:17:06.191 10:17:25 -- common/autotest_common.sh@941 -- # uname 00:17:06.191 10:17:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.191 10:17:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87436 00:17:06.191 killing process with pid 87436 00:17:06.191 10:17:25 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:06.191 10:17:25 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:06.191 10:17:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87436' 00:17:06.191 10:17:25 -- common/autotest_common.sh@955 -- # kill 87436 00:17:06.191 10:17:25 -- common/autotest_common.sh@960 -- # wait 87436 00:17:06.450 10:17:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:06.450 10:17:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:06.450 10:17:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:06.450 10:17:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.450 10:17:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:06.450 10:17:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.450 10:17:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.450 10:17:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.450 10:17:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:06.450 00:17:06.450 real 0m2.921s 00:17:06.450 user 0m10.391s 00:17:06.450 sys 0m0.661s 00:17:06.450 10:17:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:06.450 ************************************ 00:17:06.450 10:17:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.450 END TEST nvmf_bdevio 00:17:06.450 ************************************ 00:17:06.450 10:17:25 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:17:06.450 10:17:25 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.450 10:17:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:06.450 10:17:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.450 10:17:25 -- common/autotest_common.sh@10 -- # set +x 00:17:06.450 ************************************ 00:17:06.450 START TEST nvmf_bdevio_no_huge 00:17:06.450 ************************************ 00:17:06.450 10:17:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:06.450 * Looking for test storage... 
00:17:06.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:06.450 10:17:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:06.450 10:17:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:06.450 10:17:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:06.709 10:17:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:06.709 10:17:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:06.709 10:17:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:06.709 10:17:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:06.709 10:17:26 -- scripts/common.sh@335 -- # IFS=.-: 00:17:06.709 10:17:26 -- scripts/common.sh@335 -- # read -ra ver1 00:17:06.709 10:17:26 -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.709 10:17:26 -- scripts/common.sh@336 -- # read -ra ver2 00:17:06.709 10:17:26 -- scripts/common.sh@337 -- # local 'op=<' 00:17:06.709 10:17:26 -- scripts/common.sh@339 -- # ver1_l=2 00:17:06.709 10:17:26 -- scripts/common.sh@340 -- # ver2_l=1 00:17:06.709 10:17:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:06.709 10:17:26 -- scripts/common.sh@343 -- # case "$op" in 00:17:06.709 10:17:26 -- scripts/common.sh@344 -- # : 1 00:17:06.709 10:17:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:06.709 10:17:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.709 10:17:26 -- scripts/common.sh@364 -- # decimal 1 00:17:06.709 10:17:26 -- scripts/common.sh@352 -- # local d=1 00:17:06.709 10:17:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.709 10:17:26 -- scripts/common.sh@354 -- # echo 1 00:17:06.709 10:17:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:06.709 10:17:26 -- scripts/common.sh@365 -- # decimal 2 00:17:06.709 10:17:26 -- scripts/common.sh@352 -- # local d=2 00:17:06.709 10:17:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.709 10:17:26 -- scripts/common.sh@354 -- # echo 2 00:17:06.709 10:17:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:06.709 10:17:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:06.709 10:17:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:06.709 10:17:26 -- scripts/common.sh@367 -- # return 0 00:17:06.709 10:17:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.709 10:17:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.709 --rc genhtml_branch_coverage=1 00:17:06.709 --rc genhtml_function_coverage=1 00:17:06.709 --rc genhtml_legend=1 00:17:06.709 --rc geninfo_all_blocks=1 00:17:06.709 --rc geninfo_unexecuted_blocks=1 00:17:06.709 00:17:06.709 ' 00:17:06.709 10:17:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.709 --rc genhtml_branch_coverage=1 00:17:06.709 --rc genhtml_function_coverage=1 00:17:06.709 --rc genhtml_legend=1 00:17:06.709 --rc geninfo_all_blocks=1 00:17:06.709 --rc geninfo_unexecuted_blocks=1 00:17:06.709 00:17:06.709 ' 00:17:06.709 10:17:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.709 --rc genhtml_branch_coverage=1 00:17:06.709 --rc genhtml_function_coverage=1 00:17:06.709 --rc genhtml_legend=1 00:17:06.709 --rc geninfo_all_blocks=1 00:17:06.709 --rc geninfo_unexecuted_blocks=1 00:17:06.709 00:17:06.709 ' 00:17:06.709 
10:17:26 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:06.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.709 --rc genhtml_branch_coverage=1 00:17:06.709 --rc genhtml_function_coverage=1 00:17:06.709 --rc genhtml_legend=1 00:17:06.709 --rc geninfo_all_blocks=1 00:17:06.709 --rc geninfo_unexecuted_blocks=1 00:17:06.709 00:17:06.709 ' 00:17:06.709 10:17:26 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.709 10:17:26 -- nvmf/common.sh@7 -- # uname -s 00:17:06.709 10:17:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.709 10:17:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.709 10:17:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.709 10:17:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.709 10:17:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.709 10:17:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.709 10:17:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.709 10:17:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.709 10:17:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.709 10:17:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:17:06.709 10:17:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:17:06.709 10:17:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.709 10:17:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.709 10:17:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.709 10:17:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.709 10:17:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.709 10:17:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.709 10:17:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.709 10:17:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.709 10:17:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.709 10:17:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.709 10:17:26 -- paths/export.sh@5 -- # export PATH 00:17:06.709 10:17:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.709 10:17:26 -- nvmf/common.sh@46 -- # : 0 00:17:06.709 10:17:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:06.709 10:17:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:06.709 10:17:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:06.709 10:17:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.709 10:17:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.709 10:17:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:06.709 10:17:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:06.709 10:17:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:06.709 10:17:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:06.709 10:17:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:06.709 10:17:26 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:06.709 10:17:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:06.709 10:17:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.709 10:17:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:06.709 10:17:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:06.709 10:17:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:06.709 10:17:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.709 10:17:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.709 10:17:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.709 10:17:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:06.709 10:17:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:06.709 10:17:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.709 10:17:26 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.709 10:17:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:06.709 10:17:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:06.709 10:17:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.709 10:17:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.709 10:17:26 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.709 10:17:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.709 10:17:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.709 10:17:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.709 10:17:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.710 10:17:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.710 10:17:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:06.710 10:17:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:06.710 Cannot find device "nvmf_tgt_br" 00:17:06.710 10:17:26 -- nvmf/common.sh@154 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.710 Cannot find device "nvmf_tgt_br2" 00:17:06.710 10:17:26 -- nvmf/common.sh@155 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:06.710 10:17:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:06.710 Cannot find device "nvmf_tgt_br" 00:17:06.710 10:17:26 -- nvmf/common.sh@157 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:06.710 Cannot find device "nvmf_tgt_br2" 00:17:06.710 10:17:26 -- nvmf/common.sh@158 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:06.710 10:17:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:06.710 10:17:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.710 10:17:26 -- nvmf/common.sh@161 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.710 10:17:26 -- nvmf/common.sh@162 -- # true 00:17:06.710 10:17:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.710 10:17:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.710 10:17:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.710 10:17:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.710 10:17:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.710 10:17:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.710 10:17:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.011 10:17:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.011 10:17:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.011 10:17:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:07.011 10:17:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:07.011 10:17:26 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:07.011 10:17:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:07.011 10:17:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.011 10:17:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.011 10:17:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:17:07.011 10:17:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:07.011 10:17:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:07.011 10:17:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.011 10:17:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.011 10:17:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.011 10:17:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.011 10:17:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.011 10:17:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:07.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:07.011 00:17:07.011 --- 10.0.0.2 ping statistics --- 00:17:07.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.011 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:07.011 10:17:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:07.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:07.011 00:17:07.011 --- 10.0.0.3 ping statistics --- 00:17:07.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.011 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:07.011 10:17:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:17:07.011 00:17:07.011 --- 10.0.0.1 ping statistics --- 00:17:07.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.011 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:07.011 10:17:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.011 10:17:26 -- nvmf/common.sh@421 -- # return 0 00:17:07.011 10:17:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:07.011 10:17:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.011 10:17:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:07.011 10:17:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:07.011 10:17:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.011 10:17:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:07.011 10:17:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:07.011 10:17:26 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:07.011 10:17:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:07.011 10:17:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:07.011 10:17:26 -- common/autotest_common.sh@10 -- # set +x 00:17:07.011 10:17:26 -- nvmf/common.sh@469 -- # nvmfpid=87678 00:17:07.011 10:17:26 -- nvmf/common.sh@470 -- # waitforlisten 87678 00:17:07.011 10:17:26 -- common/autotest_common.sh@829 -- # '[' -z 87678 ']' 00:17:07.011 10:17:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:07.011 10:17:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
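The trace above is nvmf_veth_init from test/nvmf/common.sh building the virtual test network: the target runs in its own namespace (nvmf_tgt_ns_spdk), three veth pairs connect it to the host, a bridge (nvmf_br) ties the host-side ends together, and the three pings confirm 10.0.0.1/2/3 are reachable before the target application is started. The earlier "Cannot find device" and "Cannot open network namespace" messages come from cleaning up a previous topology that does not exist yet and are expected. A condensed sketch of the same topology, using only commands visible in the trace:

    # target side lives in its own network namespace
    ip netns add nvmf_tgt_ns_spdk

    # three veth pairs: initiator, target, second target address
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up, bridge the host-side ends, allow NVMe/TCP traffic
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity checks mirrored in the ping output above
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1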
00:17:07.011 10:17:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.011 10:17:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.011 10:17:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.011 10:17:26 -- common/autotest_common.sh@10 -- # set +x 00:17:07.011 [2024-11-19 10:17:26.475694] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:07.011 [2024-11-19 10:17:26.475809] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:07.269 [2024-11-19 10:17:26.622192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:07.269 [2024-11-19 10:17:26.736216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.269 [2024-11-19 10:17:26.736435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.269 [2024-11-19 10:17:26.736463] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.269 [2024-11-19 10:17:26.736479] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.269 [2024-11-19 10:17:26.736595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:07.269 [2024-11-19 10:17:26.737366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:07.269 [2024-11-19 10:17:26.737458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.269 [2024-11-19 10:17:26.737476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.203 10:17:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.203 10:17:27 -- common/autotest_common.sh@862 -- # return 0 00:17:08.203 10:17:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.203 10:17:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 10:17:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.203 10:17:27 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.203 10:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 [2024-11-19 10:17:27.553263] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.203 10:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.203 10:17:27 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:08.203 10:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 Malloc0 00:17:08.203 10:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.203 10:17:27 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:08.203 10:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 10:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.203 10:17:27 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
00:17:08.203 10:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 10:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.203 10:17:27 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:08.203 10:17:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.203 10:17:27 -- common/autotest_common.sh@10 -- # set +x 00:17:08.203 [2024-11-19 10:17:27.595743] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.203 10:17:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.203 10:17:27 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:08.203 10:17:27 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:08.203 10:17:27 -- nvmf/common.sh@520 -- # config=() 00:17:08.203 10:17:27 -- nvmf/common.sh@520 -- # local subsystem config 00:17:08.203 10:17:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:08.203 10:17:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:08.203 { 00:17:08.203 "params": { 00:17:08.203 "name": "Nvme$subsystem", 00:17:08.203 "trtype": "$TEST_TRANSPORT", 00:17:08.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.203 "adrfam": "ipv4", 00:17:08.203 "trsvcid": "$NVMF_PORT", 00:17:08.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.203 "hdgst": ${hdgst:-false}, 00:17:08.203 "ddgst": ${ddgst:-false} 00:17:08.203 }, 00:17:08.203 "method": "bdev_nvme_attach_controller" 00:17:08.203 } 00:17:08.203 EOF 00:17:08.203 )") 00:17:08.203 10:17:27 -- nvmf/common.sh@542 -- # cat 00:17:08.203 10:17:27 -- nvmf/common.sh@544 -- # jq . 00:17:08.203 10:17:27 -- nvmf/common.sh@545 -- # IFS=, 00:17:08.203 10:17:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:08.203 "params": { 00:17:08.203 "name": "Nvme1", 00:17:08.203 "trtype": "tcp", 00:17:08.203 "traddr": "10.0.0.2", 00:17:08.203 "adrfam": "ipv4", 00:17:08.203 "trsvcid": "4420", 00:17:08.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.203 "hdgst": false, 00:17:08.203 "ddgst": false 00:17:08.203 }, 00:17:08.203 "method": "bdev_nvme_attach_controller" 00:17:08.203 }' 00:17:08.203 [2024-11-19 10:17:27.653655] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:08.203 [2024-11-19 10:17:27.653999] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87732 ] 00:17:08.461 [2024-11-19 10:17:27.797064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.461 [2024-11-19 10:17:27.904838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.461 [2024-11-19 10:17:27.904932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.461 [2024-11-19 10:17:27.904940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.721 [2024-11-19 10:17:28.065430] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
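The two rpc.c errors immediately above and below this point come from the bdevio application trying to start its own RPC server on the default /var/tmp/spdk.sock, which nvmf_tgt already occupies; the bdevio run does not use that socket, so the messages are expected and harmless here. Before that, the trace provisioned the target and launched bdevio against it. A condensed sketch of the sequence, with rpc_cmd being the harness wrapper that talks to the running nvmf_tgt:

    # target-side provisioning (traced via rpc_cmd above)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevio consumes a generated JSON config on /dev/fd/62
    # (gen_nvmf_target_json emits the bdev_nvme_attach_controller block printed above)
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

The later "Bad file descriptor" error during the blockdev reset test is also part of the test flow: the controller is deliberately disconnected before the reset, and the log confirms the reset completes successfully.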
00:17:08.721 [2024-11-19 10:17:28.065972] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:08.721 I/O targets: 00:17:08.721 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:08.721 00:17:08.721 00:17:08.721 CUnit - A unit testing framework for C - Version 2.1-3 00:17:08.721 http://cunit.sourceforge.net/ 00:17:08.721 00:17:08.721 00:17:08.721 Suite: bdevio tests on: Nvme1n1 00:17:08.721 Test: blockdev write read block ...passed 00:17:08.721 Test: blockdev write zeroes read block ...passed 00:17:08.721 Test: blockdev write zeroes read no split ...passed 00:17:08.721 Test: blockdev write zeroes read split ...passed 00:17:08.721 Test: blockdev write zeroes read split partial ...passed 00:17:08.721 Test: blockdev reset ...[2024-11-19 10:17:28.200271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:08.721 [2024-11-19 10:17:28.200719] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x867d10 (9): Bad file descriptor 00:17:08.721 [2024-11-19 10:17:28.214767] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:08.721 passed 00:17:08.721 Test: blockdev write read 8 blocks ...passed 00:17:08.721 Test: blockdev write read size > 128k ...passed 00:17:08.721 Test: blockdev write read invalid size ...passed 00:17:08.721 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:08.721 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:08.721 Test: blockdev write read max offset ...passed 00:17:08.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:08.980 Test: blockdev writev readv 8 blocks ...passed 00:17:08.980 Test: blockdev writev readv 30 x 1block ...passed 00:17:08.980 Test: blockdev writev readv block ...passed 00:17:08.980 Test: blockdev writev readv size > 128k ...passed 00:17:08.980 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:08.980 Test: blockdev comparev and writev ...[2024-11-19 10:17:28.392946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.392994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.393975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.393998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.394016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.980 [2024-11-19 10:17:28.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.980 passed 00:17:08.980 Test: blockdev nvme passthru rw ...passed 00:17:08.980 Test: blockdev nvme passthru vendor specific ...[2024-11-19 10:17:28.478189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.980 [2024-11-19 10:17:28.478237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.478374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.980 [2024-11-19 10:17:28.478390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.478498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.980 [2024-11-19 10:17:28.478520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.980 [2024-11-19 10:17:28.478633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.980 [2024-11-19 10:17:28.478654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.980 passed 00:17:08.980 Test: blockdev nvme admin passthru ...passed 00:17:09.239 Test: blockdev copy ...passed 00:17:09.239 00:17:09.239 Run Summary: Type Total Ran Passed Failed Inactive 00:17:09.239 suites 1 1 n/a 0 0 00:17:09.239 tests 23 23 23 0 0 00:17:09.239 asserts 152 152 152 0 n/a 00:17:09.239 00:17:09.239 Elapsed time = 0.915 seconds 00:17:09.497 10:17:28 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:09.497 10:17:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.497 10:17:28 -- common/autotest_common.sh@10 -- # set +x 00:17:09.497 10:17:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.497 10:17:28 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:09.497 10:17:28 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:09.497 10:17:28 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:17:09.497 10:17:28 -- nvmf/common.sh@116 -- # sync 00:17:09.756 10:17:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:09.756 10:17:29 -- nvmf/common.sh@119 -- # set +e 00:17:09.756 10:17:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:09.756 10:17:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:09.756 rmmod nvme_tcp 00:17:09.756 rmmod nvme_fabrics 00:17:09.756 rmmod nvme_keyring 00:17:09.756 10:17:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:09.756 10:17:29 -- nvmf/common.sh@123 -- # set -e 00:17:09.756 10:17:29 -- nvmf/common.sh@124 -- # return 0 00:17:09.756 10:17:29 -- nvmf/common.sh@477 -- # '[' -n 87678 ']' 00:17:09.756 10:17:29 -- nvmf/common.sh@478 -- # killprocess 87678 00:17:09.756 10:17:29 -- common/autotest_common.sh@936 -- # '[' -z 87678 ']' 00:17:09.756 10:17:29 -- common/autotest_common.sh@940 -- # kill -0 87678 00:17:09.756 10:17:29 -- common/autotest_common.sh@941 -- # uname 00:17:09.756 10:17:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.756 10:17:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87678 00:17:09.756 10:17:29 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:09.756 10:17:29 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:09.756 killing process with pid 87678 00:17:09.756 10:17:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87678' 00:17:09.756 10:17:29 -- common/autotest_common.sh@955 -- # kill 87678 00:17:09.756 10:17:29 -- common/autotest_common.sh@960 -- # wait 87678 00:17:10.013 10:17:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:10.013 10:17:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:10.013 10:17:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:10.272 10:17:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:10.272 10:17:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:10.272 10:17:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.272 10:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.272 10:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.272 10:17:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:10.272 00:17:10.272 real 0m3.728s 00:17:10.272 user 0m13.164s 00:17:10.272 sys 0m1.210s 00:17:10.272 10:17:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:10.272 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:17:10.272 ************************************ 00:17:10.272 END TEST nvmf_bdevio_no_huge 00:17:10.272 ************************************ 00:17:10.272 10:17:29 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.272 10:17:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:10.272 10:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.272 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:17:10.272 ************************************ 00:17:10.272 START TEST nvmf_tls 00:17:10.272 ************************************ 00:17:10.272 10:17:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:10.272 * Looking for test storage... 
00:17:10.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:10.272 10:17:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:10.272 10:17:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:10.272 10:17:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:10.272 10:17:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:10.272 10:17:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:10.272 10:17:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:10.272 10:17:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:10.272 10:17:29 -- scripts/common.sh@335 -- # IFS=.-: 00:17:10.272 10:17:29 -- scripts/common.sh@335 -- # read -ra ver1 00:17:10.272 10:17:29 -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.272 10:17:29 -- scripts/common.sh@336 -- # read -ra ver2 00:17:10.272 10:17:29 -- scripts/common.sh@337 -- # local 'op=<' 00:17:10.272 10:17:29 -- scripts/common.sh@339 -- # ver1_l=2 00:17:10.272 10:17:29 -- scripts/common.sh@340 -- # ver2_l=1 00:17:10.272 10:17:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:10.272 10:17:29 -- scripts/common.sh@343 -- # case "$op" in 00:17:10.272 10:17:29 -- scripts/common.sh@344 -- # : 1 00:17:10.272 10:17:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:10.272 10:17:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.272 10:17:29 -- scripts/common.sh@364 -- # decimal 1 00:17:10.272 10:17:29 -- scripts/common.sh@352 -- # local d=1 00:17:10.272 10:17:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.272 10:17:29 -- scripts/common.sh@354 -- # echo 1 00:17:10.272 10:17:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:10.272 10:17:29 -- scripts/common.sh@365 -- # decimal 2 00:17:10.272 10:17:29 -- scripts/common.sh@352 -- # local d=2 00:17:10.272 10:17:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.272 10:17:29 -- scripts/common.sh@354 -- # echo 2 00:17:10.531 10:17:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:10.531 10:17:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:10.531 10:17:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:10.531 10:17:29 -- scripts/common.sh@367 -- # return 0 00:17:10.531 10:17:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.531 10:17:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.531 --rc genhtml_branch_coverage=1 00:17:10.531 --rc genhtml_function_coverage=1 00:17:10.531 --rc genhtml_legend=1 00:17:10.531 --rc geninfo_all_blocks=1 00:17:10.531 --rc geninfo_unexecuted_blocks=1 00:17:10.531 00:17:10.531 ' 00:17:10.531 10:17:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.531 --rc genhtml_branch_coverage=1 00:17:10.531 --rc genhtml_function_coverage=1 00:17:10.531 --rc genhtml_legend=1 00:17:10.531 --rc geninfo_all_blocks=1 00:17:10.531 --rc geninfo_unexecuted_blocks=1 00:17:10.531 00:17:10.531 ' 00:17:10.531 10:17:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.531 --rc genhtml_branch_coverage=1 00:17:10.531 --rc genhtml_function_coverage=1 00:17:10.531 --rc genhtml_legend=1 00:17:10.531 --rc geninfo_all_blocks=1 00:17:10.531 --rc geninfo_unexecuted_blocks=1 00:17:10.531 00:17:10.531 ' 00:17:10.531 
10:17:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:10.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.531 --rc genhtml_branch_coverage=1 00:17:10.531 --rc genhtml_function_coverage=1 00:17:10.531 --rc genhtml_legend=1 00:17:10.531 --rc geninfo_all_blocks=1 00:17:10.531 --rc geninfo_unexecuted_blocks=1 00:17:10.531 00:17:10.531 ' 00:17:10.531 10:17:29 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.531 10:17:29 -- nvmf/common.sh@7 -- # uname -s 00:17:10.531 10:17:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.531 10:17:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.531 10:17:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.531 10:17:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.531 10:17:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.531 10:17:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.531 10:17:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.531 10:17:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.531 10:17:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.531 10:17:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:17:10.531 10:17:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:17:10.531 10:17:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.531 10:17:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.531 10:17:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:10.531 10:17:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:10.531 10:17:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.531 10:17:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.531 10:17:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.531 10:17:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.531 10:17:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.531 10:17:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.531 10:17:29 -- paths/export.sh@5 -- # export PATH 00:17:10.531 10:17:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.531 10:17:29 -- nvmf/common.sh@46 -- # : 0 00:17:10.531 10:17:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:10.531 10:17:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:10.531 10:17:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:10.531 10:17:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.531 10:17:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.531 10:17:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:10.531 10:17:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:10.531 10:17:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:10.531 10:17:29 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:10.531 10:17:29 -- target/tls.sh@71 -- # nvmftestinit 00:17:10.531 10:17:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:10.531 10:17:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.531 10:17:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:10.531 10:17:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:10.531 10:17:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:10.531 10:17:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.531 10:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.531 10:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.531 10:17:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:10.531 10:17:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:10.531 10:17:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.531 10:17:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.531 10:17:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:10.531 10:17:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:10.531 10:17:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:10.531 10:17:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:10.531 10:17:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:10.531 
10:17:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.531 10:17:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:10.531 10:17:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:10.531 10:17:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:10.531 10:17:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:10.531 10:17:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:10.531 10:17:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:10.531 Cannot find device "nvmf_tgt_br" 00:17:10.531 10:17:29 -- nvmf/common.sh@154 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.531 Cannot find device "nvmf_tgt_br2" 00:17:10.531 10:17:29 -- nvmf/common.sh@155 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:10.531 10:17:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:10.531 Cannot find device "nvmf_tgt_br" 00:17:10.531 10:17:29 -- nvmf/common.sh@157 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:10.531 Cannot find device "nvmf_tgt_br2" 00:17:10.531 10:17:29 -- nvmf/common.sh@158 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:10.531 10:17:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:10.531 10:17:29 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.531 10:17:29 -- nvmf/common.sh@161 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.531 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:10.531 10:17:29 -- nvmf/common.sh@162 -- # true 00:17:10.531 10:17:29 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:10.531 10:17:29 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:10.531 10:17:29 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:10.531 10:17:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:10.531 10:17:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:10.531 10:17:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:10.531 10:17:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:10.790 10:17:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:10.790 10:17:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:10.790 10:17:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:10.790 10:17:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:10.790 10:17:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:10.790 10:17:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:10.790 10:17:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:10.790 10:17:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:10.790 10:17:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:10.790 10:17:30 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:10.790 10:17:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:10.790 10:17:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:10.790 10:17:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:10.790 10:17:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:10.790 10:17:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:10.790 10:17:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:10.790 10:17:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:10.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:17:10.790 00:17:10.790 --- 10.0.0.2 ping statistics --- 00:17:10.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.790 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:10.790 10:17:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:10.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:10.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:10.790 00:17:10.790 --- 10.0.0.3 ping statistics --- 00:17:10.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.790 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:10.790 10:17:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:10.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:10.790 00:17:10.790 --- 10.0.0.1 ping statistics --- 00:17:10.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.790 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:10.790 10:17:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.790 10:17:30 -- nvmf/common.sh@421 -- # return 0 00:17:10.790 10:17:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:10.790 10:17:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.790 10:17:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:10.790 10:17:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:10.790 10:17:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.790 10:17:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:10.790 10:17:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:10.790 10:17:30 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:10.790 10:17:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:10.790 10:17:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:10.790 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.790 10:17:30 -- nvmf/common.sh@469 -- # nvmfpid=87919 00:17:10.790 10:17:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:10.790 10:17:30 -- nvmf/common.sh@470 -- # waitforlisten 87919 00:17:10.790 10:17:30 -- common/autotest_common.sh@829 -- # '[' -z 87919 ']' 00:17:10.790 10:17:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.790 10:17:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.790 10:17:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
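For the TLS test the target is started with --wait-for-rpc, so the harness can switch the socket layer to the ssl implementation before any subsystem initialization happens; only then is framework_start_init issued (both steps appear in the trace that follows). A condensed sketch of that launch-and-configure sequence, with the polling loop standing in for the harness's waitforlisten helper:

    # launch nvmf_tgt inside the test namespace, deferring framework init
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!

    # wait until the RPC server answers on the default /var/tmp/spdk.sock
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # TLS-related socket options must be set while the app is still paused
    scripts/rpc.py sock_set_default_impl -i ssl
    scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
    scripts/rpc.py framework_start_init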
00:17:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.790 10:17:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.790 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.790 [2024-11-19 10:17:30.273870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:10.790 [2024-11-19 10:17:30.273962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.049 [2024-11-19 10:17:30.413222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.049 [2024-11-19 10:17:30.451036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:11.049 [2024-11-19 10:17:30.451198] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.049 [2024-11-19 10:17:30.451213] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.049 [2024-11-19 10:17:30.451224] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.049 [2024-11-19 10:17:30.451260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.049 10:17:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.049 10:17:30 -- common/autotest_common.sh@862 -- # return 0 00:17:11.049 10:17:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:11.049 10:17:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.049 10:17:30 -- common/autotest_common.sh@10 -- # set +x 00:17:11.049 10:17:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.049 10:17:30 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:17:11.049 10:17:30 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:11.306 true 00:17:11.306 10:17:30 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:11.306 10:17:30 -- target/tls.sh@82 -- # jq -r .tls_version 00:17:11.873 10:17:31 -- target/tls.sh@82 -- # version=0 00:17:11.873 10:17:31 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:17:11.874 10:17:31 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:12.133 10:17:31 -- target/tls.sh@90 -- # jq -r .tls_version 00:17:12.133 10:17:31 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.391 10:17:31 -- target/tls.sh@90 -- # version=13 00:17:12.391 10:17:31 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:17:12.391 10:17:31 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:12.649 10:17:31 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.649 10:17:31 -- target/tls.sh@98 -- # jq -r .tls_version 00:17:12.907 10:17:32 -- target/tls.sh@98 -- # version=7 00:17:12.907 10:17:32 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:17:12.907 10:17:32 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:12.907 10:17:32 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:13.164 10:17:32 -- target/tls.sh@105 -- # ktls=false 00:17:13.164 10:17:32 -- 
target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:17:13.164 10:17:32 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:13.423 10:17:32 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.423 10:17:32 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:13.702 10:17:33 -- target/tls.sh@113 -- # ktls=true 00:17:13.702 10:17:33 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:17:13.702 10:17:33 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:13.961 10:17:33 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.961 10:17:33 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:17:14.220 10:17:33 -- target/tls.sh@121 -- # ktls=false 00:17:14.220 10:17:33 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:17:14.220 10:17:33 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:17:14.220 10:17:33 -- target/tls.sh@49 -- # local key hash crc 00:17:14.220 10:17:33 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:17:14.220 10:17:33 -- target/tls.sh@51 -- # hash=01 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # gzip -1 -c 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # tail -c8 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # head -c 4 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # crc='p$H�' 00:17:14.220 10:17:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:14.220 10:17:33 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:17:14.220 10:17:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:14.220 10:17:33 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:14.220 10:17:33 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:17:14.220 10:17:33 -- target/tls.sh@49 -- # local key hash crc 00:17:14.220 10:17:33 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:17:14.220 10:17:33 -- target/tls.sh@51 -- # hash=01 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # gzip -1 -c 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # tail -c8 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # head -c 4 00:17:14.220 10:17:33 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:17:14.481 10:17:33 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:14.481 10:17:33 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:17:14.481 10:17:33 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:14.481 10:17:33 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:14.481 10:17:33 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.481 10:17:33 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:14.481 10:17:33 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:14.481 10:17:33 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:14.481 10:17:33 -- target/tls.sh@136 -- # chmod 0600 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.481 10:17:33 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:14.481 10:17:33 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:14.739 10:17:34 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:14.997 10:17:34 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.997 10:17:34 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:14.997 10:17:34 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:15.256 [2024-11-19 10:17:34.750573] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.256 10:17:34 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:15.823 10:17:35 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:15.823 [2024-11-19 10:17:35.294718] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:15.823 [2024-11-19 10:17:35.294971] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.823 10:17:35 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:16.390 malloc0 00:17:16.390 10:17:35 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:16.648 10:17:35 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:16.907 10:17:36 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.113 Initializing NVMe Controllers 00:17:29.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:29.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:29.113 Initialization complete. Launching workers. 
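The NVMeTLSkey-1:01:... strings generated above are the TLS PSK interchange format: the configured hex key followed by a CRC32, base64-encoded and wrapped with a version/hash prefix. The trace derives the CRC from gzip's trailer, whose first four bytes hold the CRC32 of the uncompressed input. A condensed sketch of what the traced format_interchange_psk helper does (it relies on the CRC bytes containing no NUL characters, as is the case for this key):

    key=00112233445566778899aabbccddeeff      # configured PSK as a hex string
    hash=01                                    # hash identifier used by the test
    # gzip -1 trailer = 4-byte CRC32 (little-endian) + 4-byte size; keep only the CRC
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # interchange form: base64 of key||crc behind the NVMeTLSkey prefix
    psk="NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
    echo -n "$psk" > key1.txt && chmod 0600 key1.txt

For this key the result matches the value printed in the trace, NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:. key1.txt is then registered on the target with nvmf_subsystem_add_host --psk and handed to spdk_nvme_perf through --psk-path, so the perf run that just launched encrypts its traffic with that key; its summary follows.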
00:17:29.113 ======================================================== 00:17:29.113 Latency(us) 00:17:29.113 Device Information : IOPS MiB/s Average min max 00:17:29.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9943.36 38.84 6437.77 1907.87 10452.13 00:17:29.113 ======================================================== 00:17:29.113 Total : 9943.36 38.84 6437.77 1907.87 10452.13 00:17:29.113 00:17:29.113 10:17:46 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.113 10:17:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:29.113 10:17:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:29.113 10:17:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:29.113 10:17:46 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:29.113 10:17:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:29.113 10:17:46 -- target/tls.sh@28 -- # bdevperf_pid=88280 00:17:29.113 10:17:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:29.113 10:17:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.113 10:17:46 -- target/tls.sh@31 -- # waitforlisten 88280 /var/tmp/bdevperf.sock 00:17:29.113 10:17:46 -- common/autotest_common.sh@829 -- # '[' -z 88280 ']' 00:17:29.113 10:17:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.113 10:17:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.113 10:17:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.113 10:17:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.113 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.113 [2024-11-19 10:17:46.478003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:29.113 [2024-11-19 10:17:46.478096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88280 ] 00:17:29.113 [2024-11-19 10:17:46.612324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.113 [2024-11-19 10:17:46.647775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.113 10:17:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.113 10:17:46 -- common/autotest_common.sh@862 -- # return 0 00:17:29.113 10:17:46 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:29.113 [2024-11-19 10:17:46.965973] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.113 TLSTESTn1 00:17:29.113 10:17:47 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.113 Running I/O for 10 seconds... 
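run_bdevperf exercises the same TLS path through the bdev layer: bdevperf is started idle (-z) on its own RPC socket, the TLS-protected controller is attached by RPC with the matching PSK (surfacing as bdev TLSTESTn1), and bdevperf.py then triggers the preconfigured verify workload whose results follow. A condensed sketch of the sequence traced above, with paths shortened to the repo root:

    # start bdevperf in wait mode on a private RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach the TLS-enabled target as bdev TLSTEST, presenting the shared key
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/target/key1.txt

    # run the workload configured on the command line and collect the summary
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests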
00:17:39.108 00:17:39.108 Latency(us) 00:17:39.108 [2024-11-19T10:17:58.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.108 [2024-11-19T10:17:58.654Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:39.108 Verification LBA range: start 0x0 length 0x2000 00:17:39.108 TLSTESTn1 : 10.02 5499.36 21.48 0.00 0.00 23236.51 5153.51 30980.65 00:17:39.108 [2024-11-19T10:17:58.654Z] =================================================================================================================== 00:17:39.108 [2024-11-19T10:17:58.654Z] Total : 5499.36 21.48 0.00 0.00 23236.51 5153.51 30980.65 00:17:39.108 0 00:17:39.108 10:17:57 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:39.109 10:17:57 -- target/tls.sh@45 -- # killprocess 88280 00:17:39.109 10:17:57 -- common/autotest_common.sh@936 -- # '[' -z 88280 ']' 00:17:39.109 10:17:57 -- common/autotest_common.sh@940 -- # kill -0 88280 00:17:39.109 10:17:57 -- common/autotest_common.sh@941 -- # uname 00:17:39.109 10:17:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.109 10:17:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88280 00:17:39.109 killing process with pid 88280 00:17:39.109 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.109 00:17:39.109 Latency(us) 00:17:39.109 [2024-11-19T10:17:58.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.109 [2024-11-19T10:17:58.655Z] =================================================================================================================== 00:17:39.109 [2024-11-19T10:17:58.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.109 10:17:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.109 10:17:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.109 10:17:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88280' 00:17:39.109 10:17:57 -- common/autotest_common.sh@955 -- # kill 88280 00:17:39.109 10:17:57 -- common/autotest_common.sh@960 -- # wait 88280 00:17:39.109 10:17:57 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.109 10:17:57 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.109 10:17:57 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.109 10:17:57 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.109 10:17:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.109 10:17:57 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
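The working PSK path that the spdk_nvme_perf run and the first TLSTESTn1 bdevperf run above exercise condenses to the rpc.py sequence below. This is a minimal sketch assembled from the xtrace, with the long /home/vagrant/spdk_repo paths shortened to rpc.py, bdevperf.py and key1.txt; option spellings can differ between SPDK releases, so treat it as illustrative rather than authoritative.

# Target side (default /var/tmp/spdk.sock):
rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key1.txt

# Initiator side (bdevperf started with -z, then driven over its own RPC socket):
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key1.txt
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests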
00:17:39.109 10:17:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.109 10:17:57 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.109 10:17:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.109 10:17:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.109 10:17:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.109 10:17:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:39.109 10:17:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.109 10:17:57 -- target/tls.sh@28 -- # bdevperf_pid=88417 00:17:39.109 10:17:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.109 10:17:57 -- target/tls.sh@31 -- # waitforlisten 88417 /var/tmp/bdevperf.sock 00:17:39.109 10:17:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.109 10:17:57 -- common/autotest_common.sh@829 -- # '[' -z 88417 ']' 00:17:39.109 10:17:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.109 10:17:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.109 10:17:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.109 10:17:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.109 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:39.109 [2024-11-19 10:17:57.453290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:39.109 [2024-11-19 10:17:57.453433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88417 ] 00:17:39.109 [2024-11-19 10:17:57.600536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.109 [2024-11-19 10:17:57.647784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.109 10:17:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.109 10:17:57 -- common/autotest_common.sh@862 -- # return 0 00:17:39.109 10:17:57 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:39.109 [2024-11-19 10:17:58.047156] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.109 [2024-11-19 10:17:58.052123] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:39.109 [2024-11-19 10:17:58.052600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6847c0 (107): Transport endpoint is not connected 00:17:39.109 [2024-11-19 10:17:58.053587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6847c0 (9): Bad file descriptor 00:17:39.109 [2024-11-19 10:17:58.054583] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.109 [2024-11-19 10:17:58.054601] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:39.109 [2024-11-19 10:17:58.054610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:39.109 2024/11/19 10:17:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:39.109 request: 00:17:39.109 { 00:17:39.109 "method": "bdev_nvme_attach_controller", 00:17:39.109 "params": { 00:17:39.109 "name": "TLSTEST", 00:17:39.109 "trtype": "tcp", 00:17:39.109 "traddr": "10.0.0.2", 00:17:39.109 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.109 "adrfam": "ipv4", 00:17:39.109 "trsvcid": "4420", 00:17:39.109 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.109 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:39.109 } 00:17:39.109 } 00:17:39.109 Got JSON-RPC error response 00:17:39.109 GoRPCClient: error on JSON-RPC call 00:17:39.109 10:17:58 -- target/tls.sh@36 -- # killprocess 88417 00:17:39.109 10:17:58 -- common/autotest_common.sh@936 -- # '[' -z 88417 ']' 00:17:39.109 10:17:58 -- common/autotest_common.sh@940 -- # kill -0 88417 00:17:39.109 10:17:58 -- common/autotest_common.sh@941 -- # uname 00:17:39.109 10:17:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.109 10:17:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88417 00:17:39.109 killing process with pid 88417 00:17:39.109 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.109 00:17:39.109 Latency(us) 00:17:39.109 [2024-11-19T10:17:58.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.109 [2024-11-19T10:17:58.655Z] =================================================================================================================== 00:17:39.109 [2024-11-19T10:17:58.655Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.109 10:17:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.109 10:17:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.109 10:17:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88417' 00:17:39.109 10:17:58 -- common/autotest_common.sh@955 -- # kill 88417 00:17:39.109 10:17:58 -- common/autotest_common.sh@960 -- # wait 88417 00:17:39.109 10:17:58 -- target/tls.sh@37 -- # return 1 00:17:39.109 10:17:58 -- common/autotest_common.sh@653 -- # es=1 00:17:39.109 10:17:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.109 10:17:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.109 10:17:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.109 10:17:58 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.109 10:17:58 -- common/autotest_common.sh@650 -- # local es=0 00:17:39.109 10:17:58 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.109 10:17:58 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.109 10:17:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.109 10:17:58 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.109 10:17:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.109 10:17:58 -- common/autotest_common.sh@653 -- # 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.109 10:17:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.109 10:17:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:39.109 10:17:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:39.109 10:17:58 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:39.109 10:17:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.109 10:17:58 -- target/tls.sh@28 -- # bdevperf_pid=88445 00:17:39.109 10:17:58 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.109 10:17:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.109 10:17:58 -- target/tls.sh@31 -- # waitforlisten 88445 /var/tmp/bdevperf.sock 00:17:39.109 10:17:58 -- common/autotest_common.sh@829 -- # '[' -z 88445 ']' 00:17:39.109 10:17:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.110 10:17:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.110 10:17:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.110 10:17:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.110 10:17:58 -- common/autotest_common.sh@10 -- # set +x 00:17:39.110 [2024-11-19 10:17:58.291696] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:39.110 [2024-11-19 10:17:58.291782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88445 ] 00:17:39.110 [2024-11-19 10:17:58.427499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.110 [2024-11-19 10:17:58.463305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.110 10:17:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:39.110 10:17:58 -- common/autotest_common.sh@862 -- # return 0 00:17:39.110 10:17:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.368 [2024-11-19 10:17:58.824970] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:39.368 [2024-11-19 10:17:58.832069] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:39.368 [2024-11-19 10:17:58.832106] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:39.368 [2024-11-19 10:17:58.832158] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:39.368 [2024-11-19 10:17:58.832464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x20ea7c0 (107): Transport endpoint is not connected 00:17:39.368 [2024-11-19 10:17:58.833446] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ea7c0 (9): Bad file descriptor 00:17:39.368 [2024-11-19 10:17:58.834442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:39.368 [2024-11-19 10:17:58.834469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:39.368 [2024-11-19 10:17:58.834480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:39.368 2024/11/19 10:17:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:39.368 request: 00:17:39.368 { 00:17:39.368 "method": "bdev_nvme_attach_controller", 00:17:39.368 "params": { 00:17:39.369 "name": "TLSTEST", 00:17:39.369 "trtype": "tcp", 00:17:39.369 "traddr": "10.0.0.2", 00:17:39.369 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:39.369 "adrfam": "ipv4", 00:17:39.369 "trsvcid": "4420", 00:17:39.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.369 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:39.369 } 00:17:39.369 } 00:17:39.369 Got JSON-RPC error response 00:17:39.369 GoRPCClient: error on JSON-RPC call 00:17:39.369 10:17:58 -- target/tls.sh@36 -- # killprocess 88445 00:17:39.369 10:17:58 -- common/autotest_common.sh@936 -- # '[' -z 88445 ']' 00:17:39.369 10:17:58 -- common/autotest_common.sh@940 -- # kill -0 88445 00:17:39.369 10:17:58 -- common/autotest_common.sh@941 -- # uname 00:17:39.369 10:17:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.369 10:17:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88445 00:17:39.369 killing process with pid 88445 00:17:39.369 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.369 00:17:39.369 Latency(us) 00:17:39.369 [2024-11-19T10:17:58.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.369 [2024-11-19T10:17:58.915Z] =================================================================================================================== 00:17:39.369 [2024-11-19T10:17:58.915Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.369 10:17:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:39.369 10:17:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:39.369 10:17:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88445' 00:17:39.369 10:17:58 -- common/autotest_common.sh@955 -- # kill 88445 00:17:39.369 10:17:58 -- common/autotest_common.sh@960 -- # wait 88445 00:17:39.627 10:17:59 -- target/tls.sh@37 -- # return 1 00:17:39.627 10:17:59 -- common/autotest_common.sh@653 -- # es=1 00:17:39.627 10:17:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.627 10:17:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.627 10:17:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.627 10:17:59 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.627 10:17:59 -- 
common/autotest_common.sh@650 -- # local es=0 00:17:39.628 10:17:59 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.628 10:17:59 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:39.628 10:17:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.628 10:17:59 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:39.628 10:17:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.628 10:17:59 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:39.628 10:17:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:39.628 10:17:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:39.628 10:17:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:39.628 10:17:59 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:39.628 10:17:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.628 10:17:59 -- target/tls.sh@28 -- # bdevperf_pid=88477 00:17:39.628 10:17:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:39.628 10:17:59 -- target/tls.sh@31 -- # waitforlisten 88477 /var/tmp/bdevperf.sock 00:17:39.628 10:17:59 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.628 10:17:59 -- common/autotest_common.sh@829 -- # '[' -z 88477 ']' 00:17:39.628 10:17:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.628 10:17:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.628 10:17:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.628 10:17:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.628 10:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:39.628 [2024-11-19 10:17:59.078806] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:39.628 [2024-11-19 10:17:59.078907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88477 ] 00:17:39.885 [2024-11-19 10:17:59.213831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.885 [2024-11-19 10:17:59.250837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.821 10:18:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.821 10:18:00 -- common/autotest_common.sh@862 -- # return 0 00:17:40.821 10:18:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:40.821 [2024-11-19 10:18:00.289550] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.822 [2024-11-19 10:18:00.294373] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.822 [2024-11-19 10:18:00.294424] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:40.822 [2024-11-19 10:18:00.294491] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:40.822 [2024-11-19 10:18:00.295075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd7c0 (107): Transport endpoint is not connected 00:17:40.822 [2024-11-19 10:18:00.296061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6fd7c0 (9): Bad file descriptor 00:17:40.822 [2024-11-19 10:18:00.297056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:40.822 [2024-11-19 10:18:00.297076] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:40.822 [2024-11-19 10:18:00.297086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
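The failure above is a lookup miss rather than a handshake error: the target builds the TLS PSK identity from the host and subsystem NQNs (the string NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 in the error), and host2 was never registered against cnode1. Registering it with its own key is what would satisfy this attach; the RPC is the same one used for host1 earlier, sketched here with an illustrative key path:

rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt

The test deliberately skips that call, which is why this attach is expected to fail.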
00:17:40.822 2024/11/19 10:18:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:40.822 request: 00:17:40.822 { 00:17:40.822 "method": "bdev_nvme_attach_controller", 00:17:40.822 "params": { 00:17:40.822 "name": "TLSTEST", 00:17:40.822 "trtype": "tcp", 00:17:40.822 "traddr": "10.0.0.2", 00:17:40.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.822 "adrfam": "ipv4", 00:17:40.822 "trsvcid": "4420", 00:17:40.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:40.822 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:40.822 } 00:17:40.822 } 00:17:40.822 Got JSON-RPC error response 00:17:40.822 GoRPCClient: error on JSON-RPC call 00:17:40.822 10:18:00 -- target/tls.sh@36 -- # killprocess 88477 00:17:40.822 10:18:00 -- common/autotest_common.sh@936 -- # '[' -z 88477 ']' 00:17:40.822 10:18:00 -- common/autotest_common.sh@940 -- # kill -0 88477 00:17:40.822 10:18:00 -- common/autotest_common.sh@941 -- # uname 00:17:40.822 10:18:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:40.822 10:18:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88477 00:17:40.822 killing process with pid 88477 00:17:40.822 Received shutdown signal, test time was about 10.000000 seconds 00:17:40.822 00:17:40.822 Latency(us) 00:17:40.822 [2024-11-19T10:18:00.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.822 [2024-11-19T10:18:00.368Z] =================================================================================================================== 00:17:40.822 [2024-11-19T10:18:00.368Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.822 10:18:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:40.822 10:18:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:40.822 10:18:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88477' 00:17:40.822 10:18:00 -- common/autotest_common.sh@955 -- # kill 88477 00:17:40.822 10:18:00 -- common/autotest_common.sh@960 -- # wait 88477 00:17:41.081 10:18:00 -- target/tls.sh@37 -- # return 1 00:17:41.081 10:18:00 -- common/autotest_common.sh@653 -- # es=1 00:17:41.081 10:18:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.081 10:18:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.081 10:18:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.081 10:18:00 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.081 10:18:00 -- common/autotest_common.sh@650 -- # local es=0 00:17:41.081 10:18:00 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.081 10:18:00 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:41.081 10:18:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.081 10:18:00 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:41.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:41.081 10:18:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.081 10:18:00 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:41.081 10:18:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.081 10:18:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.081 10:18:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.081 10:18:00 -- target/tls.sh@23 -- # psk= 00:17:41.081 10:18:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.081 10:18:00 -- target/tls.sh@28 -- # bdevperf_pid=88518 00:17:41.081 10:18:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.081 10:18:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.081 10:18:00 -- target/tls.sh@31 -- # waitforlisten 88518 /var/tmp/bdevperf.sock 00:17:41.081 10:18:00 -- common/autotest_common.sh@829 -- # '[' -z 88518 ']' 00:17:41.081 10:18:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.081 10:18:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.081 10:18:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.081 10:18:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.081 10:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:41.081 [2024-11-19 10:18:00.530201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:41.081 [2024-11-19 10:18:00.530298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88518 ] 00:17:41.416 [2024-11-19 10:18:00.664288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.416 [2024-11-19 10:18:00.699575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.416 10:18:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:41.416 10:18:00 -- common/autotest_common.sh@862 -- # return 0 00:17:41.416 10:18:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:41.674 [2024-11-19 10:18:01.063536] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.674 [2024-11-19 10:18:01.065481] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c2090 (9): Bad file descriptor 00:17:41.674 [2024-11-19 10:18:01.066476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.674 [2024-11-19 10:18:01.066500] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.674 [2024-11-19 10:18:01.066511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
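Each of the three attaches above (mismatched key, unregistered host NQN, no PSK at all) runs under the NOT wrapper and is expected to come back with es=1. Stripped of the autotest helpers, the check amounts to the pattern below; this is a sketch of the assertion, not the helper itself, reusing the wrong-key attach from the first negative case:

if rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
    echo "attach with a mismatched PSK unexpectedly succeeded" >&2
    exit 1
fi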
00:17:41.674 2024/11/19 10:18:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:41.674 request: 00:17:41.674 { 00:17:41.674 "method": "bdev_nvme_attach_controller", 00:17:41.674 "params": { 00:17:41.674 "name": "TLSTEST", 00:17:41.674 "trtype": "tcp", 00:17:41.674 "traddr": "10.0.0.2", 00:17:41.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.674 "adrfam": "ipv4", 00:17:41.674 "trsvcid": "4420", 00:17:41.674 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:41.674 } 00:17:41.674 } 00:17:41.674 Got JSON-RPC error response 00:17:41.674 GoRPCClient: error on JSON-RPC call 00:17:41.674 10:18:01 -- target/tls.sh@36 -- # killprocess 88518 00:17:41.674 10:18:01 -- common/autotest_common.sh@936 -- # '[' -z 88518 ']' 00:17:41.674 10:18:01 -- common/autotest_common.sh@940 -- # kill -0 88518 00:17:41.674 10:18:01 -- common/autotest_common.sh@941 -- # uname 00:17:41.674 10:18:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.674 10:18:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88518 00:17:41.674 killing process with pid 88518 00:17:41.674 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.674 00:17:41.674 Latency(us) 00:17:41.674 [2024-11-19T10:18:01.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.674 [2024-11-19T10:18:01.220Z] =================================================================================================================== 00:17:41.674 [2024-11-19T10:18:01.220Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.674 10:18:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.674 10:18:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.674 10:18:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88518' 00:17:41.674 10:18:01 -- common/autotest_common.sh@955 -- # kill 88518 00:17:41.674 10:18:01 -- common/autotest_common.sh@960 -- # wait 88518 00:17:41.933 10:18:01 -- target/tls.sh@37 -- # return 1 00:17:41.933 10:18:01 -- common/autotest_common.sh@653 -- # es=1 00:17:41.933 10:18:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.933 10:18:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.933 10:18:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.933 10:18:01 -- target/tls.sh@167 -- # killprocess 87919 00:17:41.933 10:18:01 -- common/autotest_common.sh@936 -- # '[' -z 87919 ']' 00:17:41.933 10:18:01 -- common/autotest_common.sh@940 -- # kill -0 87919 00:17:41.933 10:18:01 -- common/autotest_common.sh@941 -- # uname 00:17:41.933 10:18:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.933 10:18:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87919 00:17:41.933 killing process with pid 87919 00:17:41.933 10:18:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.933 10:18:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.933 10:18:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87919' 00:17:41.933 10:18:01 -- common/autotest_common.sh@955 -- # kill 87919 00:17:41.934 10:18:01 -- common/autotest_common.sh@960 -- # wait 87919 00:17:41.934 10:18:01 -- target/tls.sh@168 -- # format_interchange_psk 
00112233445566778899aabbccddeeff0011223344556677 02 00:17:41.934 10:18:01 -- target/tls.sh@49 -- # local key hash crc 00:17:41.934 10:18:01 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:41.934 10:18:01 -- target/tls.sh@51 -- # hash=02 00:17:41.934 10:18:01 -- target/tls.sh@52 -- # gzip -1 -c 00:17:41.934 10:18:01 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:41.934 10:18:01 -- target/tls.sh@52 -- # tail -c8 00:17:41.934 10:18:01 -- target/tls.sh@52 -- # head -c 4 00:17:41.934 10:18:01 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:41.934 10:18:01 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:41.934 10:18:01 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:41.934 10:18:01 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:41.934 10:18:01 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:41.934 10:18:01 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.934 10:18:01 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:41.934 10:18:01 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:41.934 10:18:01 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:41.934 10:18:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:41.934 10:18:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:41.934 10:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:41.934 10:18:01 -- nvmf/common.sh@469 -- # nvmfpid=88565 00:17:41.934 10:18:01 -- nvmf/common.sh@470 -- # waitforlisten 88565 00:17:41.934 10:18:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:41.934 10:18:01 -- common/autotest_common.sh@829 -- # '[' -z 88565 ']' 00:17:41.934 10:18:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.934 10:18:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:41.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.934 10:18:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.934 10:18:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:41.934 10:18:01 -- common/autotest_common.sh@10 -- # set +x 00:17:42.191 [2024-11-19 10:18:01.494619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:42.191 [2024-11-19 10:18:01.494712] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:42.191 [2024-11-19 10:18:01.632198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.191 [2024-11-19 10:18:01.670946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:42.191 [2024-11-19 10:18:01.671146] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
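The format_interchange_psk trace above shows how the long-format key is built: the CRC32 of the ASCII key is pulled out of a gzip trailer (the last 8 bytes of a gzip stream are CRC32 followed by the input size, so tail -c8 | head -c 4 yields the CRC), appended to the key, and the result is base64-encoded under the NVMeTLSkey-1:02: prefix. A compact sketch of the same derivation, mirroring the trace rather than the real helper and not NUL-safe in general because the CRC bytes pass through a shell variable:

key=00112233445566778899aabbccddeeff0011223344556677
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
key_long="NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
echo -n "$key_long" > key_long.txt
chmod 0600 key_long.txt

With this input the result is the NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: value shown above.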
00:17:42.191 [2024-11-19 10:18:01.671171] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.191 [2024-11-19 10:18:01.671182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.191 [2024-11-19 10:18:01.671220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.127 10:18:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.127 10:18:02 -- common/autotest_common.sh@862 -- # return 0 00:17:43.127 10:18:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.127 10:18:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:43.127 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:17:43.127 10:18:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.127 10:18:02 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.128 10:18:02 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:43.128 10:18:02 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:43.387 [2024-11-19 10:18:02.879915] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.387 10:18:02 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:43.956 10:18:03 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:43.956 [2024-11-19 10:18:03.472060] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:43.956 [2024-11-19 10:18:03.472285] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.956 10:18:03 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:44.526 malloc0 00:17:44.526 10:18:03 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:44.526 10:18:04 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.784 10:18:04 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:44.784 10:18:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.784 10:18:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.784 10:18:04 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.784 10:18:04 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:44.784 10:18:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.785 10:18:04 -- target/tls.sh@28 -- # bdevperf_pid=88668 00:17:44.785 10:18:04 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.785 10:18:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.785 10:18:04 -- target/tls.sh@31 -- # waitforlisten 88668 /var/tmp/bdevperf.sock 00:17:44.785 10:18:04 -- 
common/autotest_common.sh@829 -- # '[' -z 88668 ']' 00:17:44.785 10:18:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.785 10:18:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.785 10:18:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.785 10:18:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.785 10:18:04 -- common/autotest_common.sh@10 -- # set +x 00:17:45.043 [2024-11-19 10:18:04.370462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:45.043 [2024-11-19 10:18:04.370816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88668 ] 00:17:45.043 [2024-11-19 10:18:04.504494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.043 [2024-11-19 10:18:04.545132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.302 10:18:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.302 10:18:04 -- common/autotest_common.sh@862 -- # return 0 00:17:45.302 10:18:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:45.561 [2024-11-19 10:18:04.863246] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.561 TLSTESTn1 00:17:45.561 10:18:04 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.561 Running I/O for 10 seconds... 
00:17:57.776 00:17:57.776 Latency(us) 00:17:57.776 [2024-11-19T10:18:17.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.776 [2024-11-19T10:18:17.322Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:57.776 Verification LBA range: start 0x0 length 0x2000 00:17:57.776 TLSTESTn1 : 10.01 5384.15 21.03 0.00 0.00 23734.70 5510.98 30742.34 00:17:57.776 [2024-11-19T10:18:17.322Z] =================================================================================================================== 00:17:57.776 [2024-11-19T10:18:17.322Z] Total : 5384.15 21.03 0.00 0.00 23734.70 5510.98 30742.34 00:17:57.776 0 00:17:57.776 10:18:15 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.776 10:18:15 -- target/tls.sh@45 -- # killprocess 88668 00:17:57.776 10:18:15 -- common/autotest_common.sh@936 -- # '[' -z 88668 ']' 00:17:57.776 10:18:15 -- common/autotest_common.sh@940 -- # kill -0 88668 00:17:57.776 10:18:15 -- common/autotest_common.sh@941 -- # uname 00:17:57.776 10:18:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.776 10:18:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88668 00:17:57.776 10:18:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.776 10:18:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.776 10:18:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88668' 00:17:57.776 killing process with pid 88668 00:17:57.776 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.776 00:17:57.776 Latency(us) 00:17:57.776 [2024-11-19T10:18:17.322Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.777 [2024-11-19T10:18:17.323Z] =================================================================================================================== 00:17:57.777 [2024-11-19T10:18:17.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:57.777 10:18:15 -- common/autotest_common.sh@955 -- # kill 88668 00:17:57.777 10:18:15 -- common/autotest_common.sh@960 -- # wait 88668 00:17:57.777 10:18:15 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 10:18:15 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 10:18:15 -- common/autotest_common.sh@650 -- # local es=0 00:17:57.777 10:18:15 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 10:18:15 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:57.777 10:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.777 10:18:15 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:57.777 10:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:57.777 10:18:15 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 10:18:15 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.777 10:18:15 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.777 10:18:15 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.777 10:18:15 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:57.777 10:18:15 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.777 10:18:15 -- target/tls.sh@28 -- # bdevperf_pid=88807 00:17:57.777 10:18:15 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.777 10:18:15 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.777 10:18:15 -- target/tls.sh@31 -- # waitforlisten 88807 /var/tmp/bdevperf.sock 00:17:57.777 10:18:15 -- common/autotest_common.sh@829 -- # '[' -z 88807 ']' 00:17:57.777 10:18:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.777 10:18:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.777 10:18:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.777 10:18:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.777 10:18:15 -- common/autotest_common.sh@10 -- # set +x 00:17:57.777 [2024-11-19 10:18:15.344201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:57.777 [2024-11-19 10:18:15.344294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88807 ] 00:17:57.777 [2024-11-19 10:18:15.479610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.777 [2024-11-19 10:18:15.515362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.777 10:18:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.777 10:18:16 -- common/autotest_common.sh@862 -- # return 0 00:17:57.777 10:18:16 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 [2024-11-19 10:18:16.538936] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.777 [2024-11-19 10:18:16.539008] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:57.777 2024/11/19 10:18:16 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:57.777 request: 00:17:57.777 { 00:17:57.777 "method": "bdev_nvme_attach_controller", 00:17:57.777 "params": { 00:17:57.777 "name": "TLSTEST", 00:17:57.777 "trtype": "tcp", 00:17:57.777 "traddr": "10.0.0.2", 00:17:57.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.777 "adrfam": "ipv4", 00:17:57.777 "trsvcid": "4420", 00:17:57.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.777 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:57.777 } 00:17:57.777 } 00:17:57.777 Got 
JSON-RPC error response 00:17:57.777 GoRPCClient: error on JSON-RPC call 00:17:57.777 10:18:16 -- target/tls.sh@36 -- # killprocess 88807 00:17:57.777 10:18:16 -- common/autotest_common.sh@936 -- # '[' -z 88807 ']' 00:17:57.777 10:18:16 -- common/autotest_common.sh@940 -- # kill -0 88807 00:17:57.777 10:18:16 -- common/autotest_common.sh@941 -- # uname 00:17:57.777 10:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.777 10:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88807 00:17:57.777 10:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:57.777 10:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:57.777 killing process with pid 88807 00:17:57.777 10:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88807' 00:17:57.777 10:18:16 -- common/autotest_common.sh@955 -- # kill 88807 00:17:57.777 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.777 00:17:57.777 Latency(us) 00:17:57.777 [2024-11-19T10:18:17.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.777 [2024-11-19T10:18:17.323Z] =================================================================================================================== 00:17:57.777 [2024-11-19T10:18:17.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.777 10:18:16 -- common/autotest_common.sh@960 -- # wait 88807 00:17:57.777 10:18:16 -- target/tls.sh@37 -- # return 1 00:17:57.777 10:18:16 -- common/autotest_common.sh@653 -- # es=1 00:17:57.777 10:18:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:57.777 10:18:16 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:57.777 10:18:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:57.777 10:18:16 -- target/tls.sh@183 -- # killprocess 88565 00:17:57.777 10:18:16 -- common/autotest_common.sh@936 -- # '[' -z 88565 ']' 00:17:57.777 10:18:16 -- common/autotest_common.sh@940 -- # kill -0 88565 00:17:57.777 10:18:16 -- common/autotest_common.sh@941 -- # uname 00:17:57.777 10:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.777 10:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88565 00:17:57.777 10:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:57.777 10:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:57.777 killing process with pid 88565 00:17:57.777 10:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88565' 00:17:57.777 10:18:16 -- common/autotest_common.sh@955 -- # kill 88565 00:17:57.777 10:18:16 -- common/autotest_common.sh@960 -- # wait 88565 00:17:57.777 10:18:16 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:57.777 10:18:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:57.777 10:18:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:57.777 10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:17:57.777 10:18:16 -- nvmf/common.sh@469 -- # nvmfpid=88852 00:17:57.777 10:18:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:57.777 10:18:16 -- nvmf/common.sh@470 -- # waitforlisten 88852 00:17:57.777 10:18:16 -- common/autotest_common.sh@829 -- # '[' -z 88852 ']' 00:17:57.777 10:18:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.777 10:18:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.777 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.777 10:18:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.777 10:18:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.777 10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:17:57.777 [2024-11-19 10:18:16.976984] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:57.777 [2024-11-19 10:18:16.977104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.777 [2024-11-19 10:18:17.119031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.777 [2024-11-19 10:18:17.152094] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:57.777 [2024-11-19 10:18:17.152236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.777 [2024-11-19 10:18:17.152251] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.777 [2024-11-19 10:18:17.152260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:57.777 [2024-11-19 10:18:17.152285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.712 10:18:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.712 10:18:17 -- common/autotest_common.sh@862 -- # return 0 00:17:58.712 10:18:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.712 10:18:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.712 10:18:17 -- common/autotest_common.sh@10 -- # set +x 00:17:58.712 10:18:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.712 10:18:18 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:58.712 10:18:18 -- common/autotest_common.sh@650 -- # local es=0 00:17:58.712 10:18:18 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:58.712 10:18:18 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:58.712 10:18:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.712 10:18:18 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:58.712 10:18:18 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.712 10:18:18 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:58.712 10:18:18 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:58.712 10:18:18 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:58.970 [2024-11-19 10:18:18.351552] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:58.970 10:18:18 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.228 10:18:18 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.486 
[2024-11-19 10:18:18.911702] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.486 [2024-11-19 10:18:18.911946] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.486 10:18:18 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:59.745 malloc0 00:17:59.745 10:18:19 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.311 10:18:19 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:00.618 [2024-11-19 10:18:19.866606] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:00.618 [2024-11-19 10:18:19.866652] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:00.618 [2024-11-19 10:18:19.866672] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:00.618 2024/11/19 10:18:19 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:18:00.618 request: 00:18:00.618 { 00:18:00.618 "method": "nvmf_subsystem_add_host", 00:18:00.618 "params": { 00:18:00.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.618 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.618 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:00.618 } 00:18:00.618 } 00:18:00.618 Got JSON-RPC error response 00:18:00.618 GoRPCClient: error on JSON-RPC call 00:18:00.618 10:18:19 -- common/autotest_common.sh@653 -- # es=1 00:18:00.618 10:18:19 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.618 10:18:19 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.618 10:18:19 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.618 10:18:19 -- target/tls.sh@189 -- # killprocess 88852 00:18:00.618 10:18:19 -- common/autotest_common.sh@936 -- # '[' -z 88852 ']' 00:18:00.618 10:18:19 -- common/autotest_common.sh@940 -- # kill -0 88852 00:18:00.618 10:18:19 -- common/autotest_common.sh@941 -- # uname 00:18:00.618 10:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.618 10:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88852 00:18:00.618 10:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:00.618 10:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:00.618 killing process with pid 88852 00:18:00.618 10:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88852' 00:18:00.618 10:18:19 -- common/autotest_common.sh@955 -- # kill 88852 00:18:00.618 10:18:19 -- common/autotest_common.sh@960 -- # wait 88852 00:18:00.618 10:18:20 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:00.618 10:18:20 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:18:00.618 10:18:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:00.618 10:18:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.618 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:18:00.618 10:18:20 -- nvmf/common.sh@469 -- # nvmfpid=88968 
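Both PSK-permission failures above trace back to the deliberate chmod 0666 at tls.sh@179: bdev_nvme_attach_controller returned Code=-22 "Could not retrieve PSK from file" on the initiator side, and nvmf_subsystem_add_host hit the same tcp_load_psk check on the target side. SPDK appears to reject any PSK file that is group- or world-accessible, so owner-only permissions are needed before either RPC will load it, which is exactly what tls.sh@190 restores:

chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
stat -c '%a %n' /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt   # expect: 600 .../key_long.txt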
00:18:00.618 10:18:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:00.618 10:18:20 -- nvmf/common.sh@470 -- # waitforlisten 88968 00:18:00.618 10:18:20 -- common/autotest_common.sh@829 -- # '[' -z 88968 ']' 00:18:00.618 10:18:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.618 10:18:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.618 10:18:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.618 10:18:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.618 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:18:00.618 [2024-11-19 10:18:20.126325] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:00.618 [2024-11-19 10:18:20.126418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.878 [2024-11-19 10:18:20.261927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.878 [2024-11-19 10:18:20.296280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.878 [2024-11-19 10:18:20.296435] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.878 [2024-11-19 10:18:20.296449] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.878 [2024-11-19 10:18:20.296458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:00.878 [2024-11-19 10:18:20.296485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.878 10:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.878 10:18:20 -- common/autotest_common.sh@862 -- # return 0 00:18:00.878 10:18:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.878 10:18:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.878 10:18:20 -- common/autotest_common.sh@10 -- # set +x 00:18:01.138 10:18:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.138 10:18:20 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.138 10:18:20 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:01.138 10:18:20 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.397 [2024-11-19 10:18:20.746517] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.397 10:18:20 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:01.656 10:18:21 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:01.914 [2024-11-19 10:18:21.326660] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:01.914 [2024-11-19 10:18:21.326909] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.914 10:18:21 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.173 malloc0 00:18:02.173 10:18:21 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.432 10:18:21 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:02.690 10:18:22 -- target/tls.sh@197 -- # bdevperf_pid=89058 00:18:02.690 10:18:22 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.690 10:18:22 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.690 10:18:22 -- target/tls.sh@200 -- # waitforlisten 89058 /var/tmp/bdevperf.sock 00:18:02.690 10:18:22 -- common/autotest_common.sh@829 -- # '[' -z 89058 ']' 00:18:02.690 10:18:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.690 10:18:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.690 10:18:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.690 10:18:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.690 10:18:22 -- common/autotest_common.sh@10 -- # set +x 00:18:02.690 [2024-11-19 10:18:22.212911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:02.690 [2024-11-19 10:18:22.213000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89058 ] 00:18:02.949 [2024-11-19 10:18:22.346107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.949 [2024-11-19 10:18:22.385911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.207 10:18:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.207 10:18:22 -- common/autotest_common.sh@862 -- # return 0 00:18:03.207 10:18:22 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:03.466 [2024-11-19 10:18:22.801610] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.466 TLSTESTn1 00:18:03.466 10:18:22 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:03.724 10:18:23 -- target/tls.sh@205 -- # tgtconf='{ 00:18:03.724 "subsystems": [ 00:18:03.724 { 00:18:03.724 "subsystem": "iobuf", 00:18:03.724 "config": [ 00:18:03.724 { 00:18:03.724 "method": "iobuf_set_options", 00:18:03.725 "params": { 00:18:03.725 "large_bufsize": 135168, 00:18:03.725 "large_pool_count": 1024, 00:18:03.725 "small_bufsize": 8192, 00:18:03.725 "small_pool_count": 8192 00:18:03.725 } 00:18:03.725 } 00:18:03.725 ] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "sock", 00:18:03.725 "config": [ 00:18:03.725 { 00:18:03.725 "method": "sock_impl_set_options", 00:18:03.725 "params": { 00:18:03.725 "enable_ktls": false, 00:18:03.725 "enable_placement_id": 0, 00:18:03.725 "enable_quickack": false, 00:18:03.725 "enable_recv_pipe": true, 00:18:03.725 "enable_zerocopy_send_client": false, 00:18:03.725 "enable_zerocopy_send_server": true, 00:18:03.725 "impl_name": "posix", 00:18:03.725 "recv_buf_size": 2097152, 00:18:03.725 "send_buf_size": 2097152, 00:18:03.725 "tls_version": 0, 00:18:03.725 "zerocopy_threshold": 0 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "sock_impl_set_options", 00:18:03.725 "params": { 00:18:03.725 "enable_ktls": false, 00:18:03.725 "enable_placement_id": 0, 00:18:03.725 "enable_quickack": false, 00:18:03.725 "enable_recv_pipe": true, 00:18:03.725 "enable_zerocopy_send_client": false, 00:18:03.725 "enable_zerocopy_send_server": true, 00:18:03.725 "impl_name": "ssl", 00:18:03.725 "recv_buf_size": 4096, 00:18:03.725 "send_buf_size": 4096, 00:18:03.725 "tls_version": 0, 00:18:03.725 "zerocopy_threshold": 0 00:18:03.725 } 00:18:03.725 } 00:18:03.725 ] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "vmd", 00:18:03.725 "config": [] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "accel", 00:18:03.725 "config": [ 00:18:03.725 { 00:18:03.725 "method": "accel_set_options", 00:18:03.725 "params": { 00:18:03.725 "buf_count": 2048, 00:18:03.725 "large_cache_size": 16, 00:18:03.725 "sequence_count": 2048, 00:18:03.725 "small_cache_size": 128, 00:18:03.725 "task_count": 2048 00:18:03.725 } 00:18:03.725 } 00:18:03.725 ] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "bdev", 00:18:03.725 "config": [ 00:18:03.725 { 00:18:03.725 "method": "bdev_set_options", 00:18:03.725 "params": { 00:18:03.725 
"bdev_auto_examine": true, 00:18:03.725 "bdev_io_cache_size": 256, 00:18:03.725 "bdev_io_pool_size": 65535, 00:18:03.725 "iobuf_large_cache_size": 16, 00:18:03.725 "iobuf_small_cache_size": 128 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_raid_set_options", 00:18:03.725 "params": { 00:18:03.725 "process_window_size_kb": 1024 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_iscsi_set_options", 00:18:03.725 "params": { 00:18:03.725 "timeout_sec": 30 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_nvme_set_options", 00:18:03.725 "params": { 00:18:03.725 "action_on_timeout": "none", 00:18:03.725 "allow_accel_sequence": false, 00:18:03.725 "arbitration_burst": 0, 00:18:03.725 "bdev_retry_count": 3, 00:18:03.725 "ctrlr_loss_timeout_sec": 0, 00:18:03.725 "delay_cmd_submit": true, 00:18:03.725 "fast_io_fail_timeout_sec": 0, 00:18:03.725 "generate_uuids": false, 00:18:03.725 "high_priority_weight": 0, 00:18:03.725 "io_path_stat": false, 00:18:03.725 "io_queue_requests": 0, 00:18:03.725 "keep_alive_timeout_ms": 10000, 00:18:03.725 "low_priority_weight": 0, 00:18:03.725 "medium_priority_weight": 0, 00:18:03.725 "nvme_adminq_poll_period_us": 10000, 00:18:03.725 "nvme_ioq_poll_period_us": 0, 00:18:03.725 "reconnect_delay_sec": 0, 00:18:03.725 "timeout_admin_us": 0, 00:18:03.725 "timeout_us": 0, 00:18:03.725 "transport_ack_timeout": 0, 00:18:03.725 "transport_retry_count": 4, 00:18:03.725 "transport_tos": 0 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_nvme_set_hotplug", 00:18:03.725 "params": { 00:18:03.725 "enable": false, 00:18:03.725 "period_us": 100000 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_malloc_create", 00:18:03.725 "params": { 00:18:03.725 "block_size": 4096, 00:18:03.725 "name": "malloc0", 00:18:03.725 "num_blocks": 8192, 00:18:03.725 "optimal_io_boundary": 0, 00:18:03.725 "physical_block_size": 4096, 00:18:03.725 "uuid": "d7aac495-79dd-428c-bd31-05d8cc992161" 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "bdev_wait_for_examine" 00:18:03.725 } 00:18:03.725 ] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "nbd", 00:18:03.725 "config": [] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "scheduler", 00:18:03.725 "config": [ 00:18:03.725 { 00:18:03.725 "method": "framework_set_scheduler", 00:18:03.725 "params": { 00:18:03.725 "name": "static" 00:18:03.725 } 00:18:03.725 } 00:18:03.725 ] 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "subsystem": "nvmf", 00:18:03.725 "config": [ 00:18:03.725 { 00:18:03.725 "method": "nvmf_set_config", 00:18:03.725 "params": { 00:18:03.725 "admin_cmd_passthru": { 00:18:03.725 "identify_ctrlr": false 00:18:03.725 }, 00:18:03.725 "discovery_filter": "match_any" 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "nvmf_set_max_subsystems", 00:18:03.725 "params": { 00:18:03.725 "max_subsystems": 1024 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "nvmf_set_crdt", 00:18:03.725 "params": { 00:18:03.725 "crdt1": 0, 00:18:03.725 "crdt2": 0, 00:18:03.725 "crdt3": 0 00:18:03.725 } 00:18:03.725 }, 00:18:03.725 { 00:18:03.725 "method": "nvmf_create_transport", 00:18:03.725 "params": { 00:18:03.725 "abort_timeout_sec": 1, 00:18:03.725 "buf_cache_size": 4294967295, 00:18:03.725 "c2h_success": false, 00:18:03.725 "dif_insert_or_strip": false, 00:18:03.725 "in_capsule_data_size": 4096, 00:18:03.725 "io_unit_size": 131072, 00:18:03.725 "max_aq_depth": 128, 
00:18:03.725 "max_io_qpairs_per_ctrlr": 127, 00:18:03.725 "max_io_size": 131072, 00:18:03.725 "max_queue_depth": 128, 00:18:03.725 "num_shared_buffers": 511, 00:18:03.725 "sock_priority": 0, 00:18:03.725 "trtype": "TCP", 00:18:03.725 "zcopy": false 00:18:03.726 } 00:18:03.726 }, 00:18:03.726 { 00:18:03.726 "method": "nvmf_create_subsystem", 00:18:03.726 "params": { 00:18:03.726 "allow_any_host": false, 00:18:03.726 "ana_reporting": false, 00:18:03.726 "max_cntlid": 65519, 00:18:03.726 "max_namespaces": 10, 00:18:03.726 "min_cntlid": 1, 00:18:03.726 "model_number": "SPDK bdev Controller", 00:18:03.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.726 "serial_number": "SPDK00000000000001" 00:18:03.726 } 00:18:03.726 }, 00:18:03.726 { 00:18:03.726 "method": "nvmf_subsystem_add_host", 00:18:03.726 "params": { 00:18:03.726 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.726 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:03.726 } 00:18:03.726 }, 00:18:03.726 { 00:18:03.726 "method": "nvmf_subsystem_add_ns", 00:18:03.726 "params": { 00:18:03.726 "namespace": { 00:18:03.726 "bdev_name": "malloc0", 00:18:03.726 "nguid": "D7AAC49579DD428CBD3105D8CC992161", 00:18:03.726 "nsid": 1, 00:18:03.726 "uuid": "d7aac495-79dd-428c-bd31-05d8cc992161" 00:18:03.726 }, 00:18:03.726 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:03.726 } 00:18:03.726 }, 00:18:03.726 { 00:18:03.726 "method": "nvmf_subsystem_add_listener", 00:18:03.726 "params": { 00:18:03.726 "listen_address": { 00:18:03.726 "adrfam": "IPv4", 00:18:03.726 "traddr": "10.0.0.2", 00:18:03.726 "trsvcid": "4420", 00:18:03.726 "trtype": "TCP" 00:18:03.726 }, 00:18:03.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.726 "secure_channel": true 00:18:03.726 } 00:18:03.726 } 00:18:03.726 ] 00:18:03.726 } 00:18:03.726 ] 00:18:03.726 }' 00:18:03.726 10:18:23 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:04.294 10:18:23 -- target/tls.sh@206 -- # bdevperfconf='{ 00:18:04.294 "subsystems": [ 00:18:04.294 { 00:18:04.294 "subsystem": "iobuf", 00:18:04.294 "config": [ 00:18:04.294 { 00:18:04.294 "method": "iobuf_set_options", 00:18:04.294 "params": { 00:18:04.294 "large_bufsize": 135168, 00:18:04.294 "large_pool_count": 1024, 00:18:04.294 "small_bufsize": 8192, 00:18:04.294 "small_pool_count": 8192 00:18:04.294 } 00:18:04.294 } 00:18:04.294 ] 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "subsystem": "sock", 00:18:04.294 "config": [ 00:18:04.294 { 00:18:04.294 "method": "sock_impl_set_options", 00:18:04.294 "params": { 00:18:04.294 "enable_ktls": false, 00:18:04.294 "enable_placement_id": 0, 00:18:04.294 "enable_quickack": false, 00:18:04.294 "enable_recv_pipe": true, 00:18:04.294 "enable_zerocopy_send_client": false, 00:18:04.294 "enable_zerocopy_send_server": true, 00:18:04.294 "impl_name": "posix", 00:18:04.294 "recv_buf_size": 2097152, 00:18:04.294 "send_buf_size": 2097152, 00:18:04.294 "tls_version": 0, 00:18:04.294 "zerocopy_threshold": 0 00:18:04.294 } 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "method": "sock_impl_set_options", 00:18:04.294 "params": { 00:18:04.294 "enable_ktls": false, 00:18:04.294 "enable_placement_id": 0, 00:18:04.294 "enable_quickack": false, 00:18:04.294 "enable_recv_pipe": true, 00:18:04.294 "enable_zerocopy_send_client": false, 00:18:04.294 "enable_zerocopy_send_server": true, 00:18:04.294 "impl_name": "ssl", 00:18:04.294 "recv_buf_size": 4096, 00:18:04.294 "send_buf_size": 4096, 00:18:04.294 
"tls_version": 0, 00:18:04.294 "zerocopy_threshold": 0 00:18:04.294 } 00:18:04.294 } 00:18:04.294 ] 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "subsystem": "vmd", 00:18:04.294 "config": [] 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "subsystem": "accel", 00:18:04.294 "config": [ 00:18:04.294 { 00:18:04.294 "method": "accel_set_options", 00:18:04.294 "params": { 00:18:04.294 "buf_count": 2048, 00:18:04.294 "large_cache_size": 16, 00:18:04.294 "sequence_count": 2048, 00:18:04.294 "small_cache_size": 128, 00:18:04.294 "task_count": 2048 00:18:04.294 } 00:18:04.294 } 00:18:04.294 ] 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "subsystem": "bdev", 00:18:04.294 "config": [ 00:18:04.294 { 00:18:04.294 "method": "bdev_set_options", 00:18:04.294 "params": { 00:18:04.294 "bdev_auto_examine": true, 00:18:04.294 "bdev_io_cache_size": 256, 00:18:04.294 "bdev_io_pool_size": 65535, 00:18:04.294 "iobuf_large_cache_size": 16, 00:18:04.294 "iobuf_small_cache_size": 128 00:18:04.294 } 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "method": "bdev_raid_set_options", 00:18:04.294 "params": { 00:18:04.294 "process_window_size_kb": 1024 00:18:04.294 } 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "method": "bdev_iscsi_set_options", 00:18:04.294 "params": { 00:18:04.294 "timeout_sec": 30 00:18:04.294 } 00:18:04.294 }, 00:18:04.294 { 00:18:04.294 "method": "bdev_nvme_set_options", 00:18:04.294 "params": { 00:18:04.294 "action_on_timeout": "none", 00:18:04.294 "allow_accel_sequence": false, 00:18:04.294 "arbitration_burst": 0, 00:18:04.294 "bdev_retry_count": 3, 00:18:04.294 "ctrlr_loss_timeout_sec": 0, 00:18:04.294 "delay_cmd_submit": true, 00:18:04.294 "fast_io_fail_timeout_sec": 0, 00:18:04.294 "generate_uuids": false, 00:18:04.294 "high_priority_weight": 0, 00:18:04.294 "io_path_stat": false, 00:18:04.295 "io_queue_requests": 512, 00:18:04.295 "keep_alive_timeout_ms": 10000, 00:18:04.295 "low_priority_weight": 0, 00:18:04.295 "medium_priority_weight": 0, 00:18:04.295 "nvme_adminq_poll_period_us": 10000, 00:18:04.295 "nvme_ioq_poll_period_us": 0, 00:18:04.295 "reconnect_delay_sec": 0, 00:18:04.295 "timeout_admin_us": 0, 00:18:04.295 "timeout_us": 0, 00:18:04.295 "transport_ack_timeout": 0, 00:18:04.295 "transport_retry_count": 4, 00:18:04.295 "transport_tos": 0 00:18:04.295 } 00:18:04.295 }, 00:18:04.295 { 00:18:04.295 "method": "bdev_nvme_attach_controller", 00:18:04.295 "params": { 00:18:04.295 "adrfam": "IPv4", 00:18:04.295 "ctrlr_loss_timeout_sec": 0, 00:18:04.295 "ddgst": false, 00:18:04.295 "fast_io_fail_timeout_sec": 0, 00:18:04.295 "hdgst": false, 00:18:04.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.295 "name": "TLSTEST", 00:18:04.295 "prchk_guard": false, 00:18:04.295 "prchk_reftag": false, 00:18:04.295 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:04.295 "reconnect_delay_sec": 0, 00:18:04.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.295 "traddr": "10.0.0.2", 00:18:04.295 "trsvcid": "4420", 00:18:04.295 "trtype": "TCP" 00:18:04.295 } 00:18:04.295 }, 00:18:04.295 { 00:18:04.295 "method": "bdev_nvme_set_hotplug", 00:18:04.295 "params": { 00:18:04.295 "enable": false, 00:18:04.295 "period_us": 100000 00:18:04.295 } 00:18:04.295 }, 00:18:04.295 { 00:18:04.295 "method": "bdev_wait_for_examine" 00:18:04.295 } 00:18:04.295 ] 00:18:04.295 }, 00:18:04.295 { 00:18:04.295 "subsystem": "nbd", 00:18:04.295 "config": [] 00:18:04.295 } 00:18:04.295 ] 00:18:04.295 }' 00:18:04.295 10:18:23 -- target/tls.sh@208 -- # killprocess 89058 00:18:04.295 10:18:23 -- 
common/autotest_common.sh@936 -- # '[' -z 89058 ']' 00:18:04.295 10:18:23 -- common/autotest_common.sh@940 -- # kill -0 89058 00:18:04.295 10:18:23 -- common/autotest_common.sh@941 -- # uname 00:18:04.295 10:18:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.295 10:18:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89058 00:18:04.295 10:18:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:04.295 10:18:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:04.295 killing process with pid 89058 00:18:04.295 10:18:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89058' 00:18:04.295 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.295 00:18:04.295 Latency(us) 00:18:04.295 [2024-11-19T10:18:23.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.295 [2024-11-19T10:18:23.841Z] =================================================================================================================== 00:18:04.295 [2024-11-19T10:18:23.841Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.295 10:18:23 -- common/autotest_common.sh@955 -- # kill 89058 00:18:04.295 10:18:23 -- common/autotest_common.sh@960 -- # wait 89058 00:18:04.295 10:18:23 -- target/tls.sh@209 -- # killprocess 88968 00:18:04.295 10:18:23 -- common/autotest_common.sh@936 -- # '[' -z 88968 ']' 00:18:04.295 10:18:23 -- common/autotest_common.sh@940 -- # kill -0 88968 00:18:04.295 10:18:23 -- common/autotest_common.sh@941 -- # uname 00:18:04.295 10:18:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:04.295 10:18:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88968 00:18:04.554 10:18:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:04.554 10:18:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:04.554 killing process with pid 88968 00:18:04.554 10:18:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88968' 00:18:04.554 10:18:23 -- common/autotest_common.sh@955 -- # kill 88968 00:18:04.554 10:18:23 -- common/autotest_common.sh@960 -- # wait 88968 00:18:04.554 10:18:24 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:04.554 10:18:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.554 10:18:24 -- target/tls.sh@212 -- # echo '{ 00:18:04.554 "subsystems": [ 00:18:04.554 { 00:18:04.554 "subsystem": "iobuf", 00:18:04.554 "config": [ 00:18:04.554 { 00:18:04.554 "method": "iobuf_set_options", 00:18:04.554 "params": { 00:18:04.554 "large_bufsize": 135168, 00:18:04.554 "large_pool_count": 1024, 00:18:04.554 "small_bufsize": 8192, 00:18:04.554 "small_pool_count": 8192 00:18:04.554 } 00:18:04.554 } 00:18:04.554 ] 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "subsystem": "sock", 00:18:04.554 "config": [ 00:18:04.554 { 00:18:04.554 "method": "sock_impl_set_options", 00:18:04.554 "params": { 00:18:04.554 "enable_ktls": false, 00:18:04.554 "enable_placement_id": 0, 00:18:04.554 "enable_quickack": false, 00:18:04.554 "enable_recv_pipe": true, 00:18:04.554 "enable_zerocopy_send_client": false, 00:18:04.554 "enable_zerocopy_send_server": true, 00:18:04.554 "impl_name": "posix", 00:18:04.554 "recv_buf_size": 2097152, 00:18:04.554 "send_buf_size": 2097152, 00:18:04.554 "tls_version": 0, 00:18:04.554 "zerocopy_threshold": 0 00:18:04.554 } 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "method": "sock_impl_set_options", 00:18:04.554 "params": { 00:18:04.554 
"enable_ktls": false, 00:18:04.554 "enable_placement_id": 0, 00:18:04.554 "enable_quickack": false, 00:18:04.554 "enable_recv_pipe": true, 00:18:04.554 "enable_zerocopy_send_client": false, 00:18:04.554 "enable_zerocopy_send_server": true, 00:18:04.554 "impl_name": "ssl", 00:18:04.554 "recv_buf_size": 4096, 00:18:04.554 "send_buf_size": 4096, 00:18:04.554 "tls_version": 0, 00:18:04.554 "zerocopy_threshold": 0 00:18:04.554 } 00:18:04.554 } 00:18:04.554 ] 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "subsystem": "vmd", 00:18:04.554 "config": [] 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "subsystem": "accel", 00:18:04.554 "config": [ 00:18:04.554 { 00:18:04.554 "method": "accel_set_options", 00:18:04.554 "params": { 00:18:04.554 "buf_count": 2048, 00:18:04.554 "large_cache_size": 16, 00:18:04.554 "sequence_count": 2048, 00:18:04.554 "small_cache_size": 128, 00:18:04.554 "task_count": 2048 00:18:04.554 } 00:18:04.554 } 00:18:04.554 ] 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "subsystem": "bdev", 00:18:04.554 "config": [ 00:18:04.554 { 00:18:04.554 "method": "bdev_set_options", 00:18:04.554 "params": { 00:18:04.554 "bdev_auto_examine": true, 00:18:04.554 "bdev_io_cache_size": 256, 00:18:04.554 "bdev_io_pool_size": 65535, 00:18:04.554 "iobuf_large_cache_size": 16, 00:18:04.554 "iobuf_small_cache_size": 128 00:18:04.554 } 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "method": "bdev_raid_set_options", 00:18:04.554 "params": { 00:18:04.554 "process_window_size_kb": 1024 00:18:04.554 } 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "method": "bdev_iscsi_set_options", 00:18:04.554 "params": { 00:18:04.554 "timeout_sec": 30 00:18:04.554 } 00:18:04.554 }, 00:18:04.554 { 00:18:04.554 "method": "bdev_nvme_set_options", 00:18:04.554 "params": { 00:18:04.554 "action_on_timeout": "none", 00:18:04.554 "allow_accel_sequence": false, 00:18:04.554 "arbitration_burst": 0, 00:18:04.554 "bdev_retry_count": 3, 00:18:04.554 "ctrlr_loss_timeout_sec": 0, 00:18:04.554 "delay_cmd_submit": true, 00:18:04.554 "fast_io_fail_timeout_sec": 0, 00:18:04.554 "generate_uuids": false, 00:18:04.554 "high_priority_weight": 0, 00:18:04.554 "io_path_stat": false, 00:18:04.554 "io_queue_requests": 0, 00:18:04.554 "keep_alive_timeout_ms": 10000, 00:18:04.554 "low_priority_weight": 0, 00:18:04.554 "medium_priority_weight": 0, 00:18:04.554 "nvme_adminq_poll_period_us": 10000, 00:18:04.554 "nvme_ioq_poll_period_us": 0, 00:18:04.554 "reconnect_delay_sec": 0, 00:18:04.554 "timeout_admin_us": 0, 00:18:04.554 "timeout_us": 0, 00:18:04.554 "transport_ack_timeout": 0, 00:18:04.554 "transport_retry_count": 4, 00:18:04.554 "transport_tos": 0 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "bdev_nvme_set_hotplug", 00:18:04.555 "params": { 00:18:04.555 "enable": false, 00:18:04.555 "period_us": 100000 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "bdev_malloc_create", 00:18:04.555 "params": { 00:18:04.555 "block_size": 4096, 00:18:04.555 "name": "malloc0", 00:18:04.555 "num_blocks": 8192, 00:18:04.555 "optimal_io_boundary": 0, 00:18:04.555 "physical_block_size": 4096, 00:18:04.555 "uuid": "d7aac495-79dd-428c-bd31-05d8cc992161" 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "bdev_wait_for_examine" 00:18:04.555 } 00:18:04.555 ] 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "subsystem": "nbd", 00:18:04.555 "config": [] 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "subsystem": "scheduler", 00:18:04.555 "config": [ 00:18:04.555 { 00:18:04.555 "method": "framework_set_scheduler", 00:18:04.555 
"params": { 00:18:04.555 "name": "static" 00:18:04.555 } 00:18:04.555 } 00:18:04.555 ] 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "subsystem": "nvmf", 00:18:04.555 "config": [ 00:18:04.555 { 00:18:04.555 "method": "nvmf_set_config", 00:18:04.555 "params": { 00:18:04.555 "admin_cmd_passthru": { 00:18:04.555 "identify_ctrlr": false 00:18:04.555 }, 00:18:04.555 "discovery_filter": "match_any" 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_set_max_subsystems", 00:18:04.555 "params": { 00:18:04.555 "max_subsystems": 1024 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_set_crdt", 00:18:04.555 "params": { 00:18:04.555 "crdt1": 0, 00:18:04.555 "crdt2": 0, 00:18:04.555 "crdt3": 0 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_create_transport", 00:18:04.555 "params": { 00:18:04.555 "abort_timeout_sec": 1, 00:18:04.555 "buf_cache_size": 4294967295, 00:18:04.555 "c2h_success": false, 00:18:04.555 "dif_insert_or_strip": false, 00:18:04.555 "in_capsule_data_size": 4096, 00:18:04.555 "io_unit_size": 131072, 00:18:04.555 "max_aq_depth": 128, 00:18:04.555 "max_io_qpairs_per_ctrlr": 127, 00:18:04.555 "max_io_size": 131072, 00:18:04.555 "max_queue_depth": 128, 00:18:04.555 "num_shared_buffers": 511, 00:18:04.555 "sock_priority": 0, 00:18:04.555 "trtype": "TCP", 00:18:04.555 "zcopy": false 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_create_subsystem", 00:18:04.555 "params": { 00:18:04.555 "allow_any_host": false, 00:18:04.555 "ana_reporting": false, 00:18:04.555 "max_cntlid": 65519, 00:18:04.555 "max_namespaces": 10, 00:18:04.555 "min_cntlid": 1, 00:18:04.555 "model_number": "SPDK bdev Controller", 00:18:04.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.555 "serial_number": "SPDK00000000000001" 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_subsystem_add_host", 00:18:04.555 "params": { 00:18:04.555 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.555 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_subsystem_add_ns", 00:18:04.555 "params": { 00:18:04.555 "namespace": { 00:18:04.555 "bdev_name": "malloc0", 00:18:04.555 "nguid": "D7AAC49579DD428CBD3105D8CC992161", 00:18:04.555 "nsid": 1, 00:18:04.555 "uuid": "d7aac495-79dd-428c-bd31-05d8cc992161" 00:18:04.555 }, 00:18:04.555 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:18:04.555 } 00:18:04.555 }, 00:18:04.555 { 00:18:04.555 "method": "nvmf_subsystem_add_listener", 00:18:04.555 "params": { 00:18:04.555 "listen_address": { 00:18:04.555 "adrfam": "IPv4", 00:18:04.555 "traddr": "10.0.0.2", 00:18:04.555 "trsvcid": "4420", 00:18:04.555 "trtype": "TCP" 00:18:04.555 }, 00:18:04.555 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.555 "secure_channel": true 00:18:04.555 } 00:18:04.555 } 00:18:04.555 ] 00:18:04.555 } 00:18:04.555 ] 00:18:04.555 }' 00:18:04.555 10:18:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.555 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:18:04.555 10:18:24 -- nvmf/common.sh@469 -- # nvmfpid=89118 00:18:04.555 10:18:24 -- nvmf/common.sh@470 -- # waitforlisten 89118 00:18:04.555 10:18:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:04.555 10:18:24 -- common/autotest_common.sh@829 -- # '[' -z 89118 ']' 00:18:04.555 10:18:24 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.555 10:18:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.555 10:18:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.555 10:18:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.555 10:18:24 -- common/autotest_common.sh@10 -- # set +x 00:18:04.555 [2024-11-19 10:18:24.062173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:04.555 [2024-11-19 10:18:24.062273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.814 [2024-11-19 10:18:24.197318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.814 [2024-11-19 10:18:24.230119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.814 [2024-11-19 10:18:24.230260] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.814 [2024-11-19 10:18:24.230273] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.814 [2024-11-19 10:18:24.230282] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.814 [2024-11-19 10:18:24.230314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.072 [2024-11-19 10:18:24.400594] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.072 [2024-11-19 10:18:24.432547] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.072 [2024-11-19 10:18:24.432745] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.639 10:18:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.639 10:18:25 -- common/autotest_common.sh@862 -- # return 0 00:18:05.639 10:18:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:05.639 10:18:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.639 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:18:05.639 10:18:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.639 10:18:25 -- target/tls.sh@216 -- # bdevperf_pid=89162 00:18:05.639 10:18:25 -- target/tls.sh@217 -- # waitforlisten 89162 /var/tmp/bdevperf.sock 00:18:05.639 10:18:25 -- common/autotest_common.sh@829 -- # '[' -z 89162 ']' 00:18:05.639 10:18:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.639 10:18:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.639 10:18:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:05.639 10:18:25 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:05.639 10:18:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.639 10:18:25 -- target/tls.sh@213 -- # echo '{ 00:18:05.639 "subsystems": [ 00:18:05.639 { 00:18:05.639 "subsystem": "iobuf", 00:18:05.639 "config": [ 00:18:05.639 { 00:18:05.639 "method": "iobuf_set_options", 00:18:05.639 "params": { 00:18:05.639 "large_bufsize": 135168, 00:18:05.639 "large_pool_count": 1024, 00:18:05.639 "small_bufsize": 8192, 00:18:05.639 "small_pool_count": 8192 00:18:05.639 } 00:18:05.639 } 00:18:05.639 ] 00:18:05.639 }, 00:18:05.639 { 00:18:05.639 "subsystem": "sock", 00:18:05.639 "config": [ 00:18:05.639 { 00:18:05.639 "method": "sock_impl_set_options", 00:18:05.639 "params": { 00:18:05.639 "enable_ktls": false, 00:18:05.639 "enable_placement_id": 0, 00:18:05.639 "enable_quickack": false, 00:18:05.639 "enable_recv_pipe": true, 00:18:05.639 "enable_zerocopy_send_client": false, 00:18:05.639 "enable_zerocopy_send_server": true, 00:18:05.639 "impl_name": "posix", 00:18:05.639 "recv_buf_size": 2097152, 00:18:05.639 "send_buf_size": 2097152, 00:18:05.639 "tls_version": 0, 00:18:05.639 "zerocopy_threshold": 0 00:18:05.639 } 00:18:05.639 }, 00:18:05.639 { 00:18:05.639 "method": "sock_impl_set_options", 00:18:05.639 "params": { 00:18:05.639 "enable_ktls": false, 00:18:05.639 "enable_placement_id": 0, 00:18:05.639 "enable_quickack": false, 00:18:05.639 "enable_recv_pipe": true, 00:18:05.639 "enable_zerocopy_send_client": false, 00:18:05.639 "enable_zerocopy_send_server": true, 00:18:05.639 "impl_name": "ssl", 00:18:05.639 "recv_buf_size": 4096, 00:18:05.639 "send_buf_size": 4096, 00:18:05.639 "tls_version": 0, 00:18:05.639 "zerocopy_threshold": 0 00:18:05.639 } 00:18:05.639 } 00:18:05.639 ] 00:18:05.639 }, 00:18:05.639 { 00:18:05.639 "subsystem": "vmd", 00:18:05.639 "config": [] 00:18:05.639 }, 00:18:05.639 { 00:18:05.639 "subsystem": "accel", 00:18:05.639 "config": [ 00:18:05.639 { 00:18:05.639 "method": "accel_set_options", 00:18:05.639 "params": { 00:18:05.639 "buf_count": 2048, 00:18:05.639 "large_cache_size": 16, 00:18:05.639 "sequence_count": 2048, 00:18:05.639 "small_cache_size": 128, 00:18:05.640 "task_count": 2048 00:18:05.640 } 00:18:05.640 } 00:18:05.640 ] 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "subsystem": "bdev", 00:18:05.640 "config": [ 00:18:05.640 { 00:18:05.640 "method": "bdev_set_options", 00:18:05.640 "params": { 00:18:05.640 "bdev_auto_examine": true, 00:18:05.640 "bdev_io_cache_size": 256, 00:18:05.640 "bdev_io_pool_size": 65535, 00:18:05.640 "iobuf_large_cache_size": 16, 00:18:05.640 "iobuf_small_cache_size": 128 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_raid_set_options", 00:18:05.640 "params": { 00:18:05.640 "process_window_size_kb": 1024 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_iscsi_set_options", 00:18:05.640 "params": { 00:18:05.640 "timeout_sec": 30 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_nvme_set_options", 00:18:05.640 "params": { 00:18:05.640 "action_on_timeout": "none", 00:18:05.640 "allow_accel_sequence": false, 00:18:05.640 "arbitration_burst": 0, 00:18:05.640 "bdev_retry_count": 3, 00:18:05.640 "ctrlr_loss_timeout_sec": 0, 00:18:05.640 "delay_cmd_submit": true, 00:18:05.640 "fast_io_fail_timeout_sec": 0, 00:18:05.640 "generate_uuids": false, 00:18:05.640 
"high_priority_weight": 0, 00:18:05.640 "io_path_stat": false, 00:18:05.640 "io_queue_requests": 512, 00:18:05.640 "keep_alive_timeout_ms": 10000, 00:18:05.640 "low_priority_weight": 0, 00:18:05.640 "medium_priority_weight": 0, 00:18:05.640 "nvme_adminq_poll_period_us": 10000, 00:18:05.640 "nvme_ioq_poll_period_us": 0, 00:18:05.640 "reconnect_delay_sec": 0, 00:18:05.640 "timeout_admin_us": 0, 00:18:05.640 "timeout_us": 0, 00:18:05.640 "transport_ack_timeout": 0, 00:18:05.640 "transport_retry_count": 4, 00:18:05.640 "transport_tos": 0 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_nvme_attach_controller", 00:18:05.640 "params": { 00:18:05.640 "adrfam": "IPv4", 00:18:05.640 "ctrlr_loss_timeout_sec": 0, 00:18:05.640 "ddgst": false, 00:18:05.640 "fast_io_fail_timeout_sec": 0, 00:18:05.640 "hdgst": false, 00:18:05.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.640 "name": "TLSTEST", 00:18:05.640 "prchk_guard": false, 00:18:05.640 "prchk_reftag": false, 00:18:05.640 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:18:05.640 "reconnect_delay_sec": 0, 00:18:05.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.640 "traddr": "10.0.0.2", 00:18:05.640 "trsvcid": "4420", 00:18:05.640 "trtype": "TCP" 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_nvme_set_hotplug", 00:18:05.640 "params": { 00:18:05.640 "enable": false, 00:18:05.640 "period_us": 100000 00:18:05.640 } 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "method": "bdev_wait_for_examine" 00:18:05.640 } 00:18:05.640 ] 00:18:05.640 }, 00:18:05.640 { 00:18:05.640 "subsystem": "nbd", 00:18:05.640 "config": [] 00:18:05.640 } 00:18:05.640 ] 00:18:05.640 }' 00:18:05.640 10:18:25 -- common/autotest_common.sh@10 -- # set +x 00:18:05.640 [2024-11-19 10:18:25.172403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:05.640 [2024-11-19 10:18:25.172486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89162 ] 00:18:05.898 [2024-11-19 10:18:25.306941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.898 [2024-11-19 10:18:25.346450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.157 [2024-11-19 10:18:25.466191] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.775 10:18:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.775 10:18:26 -- common/autotest_common.sh@862 -- # return 0 00:18:06.775 10:18:26 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.775 Running I/O for 10 seconds... 
00:18:19.006 00:18:19.006 Latency(us) 00:18:19.006 [2024-11-19T10:18:38.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.006 [2024-11-19T10:18:38.552Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:19.006 Verification LBA range: start 0x0 length 0x2000 00:18:19.006 TLSTESTn1 : 10.02 5151.43 20.12 0.00 0.00 24807.87 4796.04 33602.09 00:18:19.006 [2024-11-19T10:18:38.552Z] =================================================================================================================== 00:18:19.006 [2024-11-19T10:18:38.552Z] Total : 5151.43 20.12 0.00 0.00 24807.87 4796.04 33602.09 00:18:19.006 0 00:18:19.006 10:18:36 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:19.006 10:18:36 -- target/tls.sh@223 -- # killprocess 89162 00:18:19.006 10:18:36 -- common/autotest_common.sh@936 -- # '[' -z 89162 ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@940 -- # kill -0 89162 00:18:19.006 10:18:36 -- common/autotest_common.sh@941 -- # uname 00:18:19.006 10:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89162 00:18:19.006 10:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:19.006 10:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:19.006 killing process with pid 89162 00:18:19.006 10:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89162' 00:18:19.006 10:18:36 -- common/autotest_common.sh@955 -- # kill 89162 00:18:19.006 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.006 00:18:19.006 Latency(us) 00:18:19.006 [2024-11-19T10:18:38.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.006 [2024-11-19T10:18:38.552Z] =================================================================================================================== 00:18:19.006 [2024-11-19T10:18:38.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.006 10:18:36 -- common/autotest_common.sh@960 -- # wait 89162 00:18:19.006 10:18:36 -- target/tls.sh@224 -- # killprocess 89118 00:18:19.006 10:18:36 -- common/autotest_common.sh@936 -- # '[' -z 89118 ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@940 -- # kill -0 89118 00:18:19.006 10:18:36 -- common/autotest_common.sh@941 -- # uname 00:18:19.006 10:18:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89118 00:18:19.006 10:18:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.006 10:18:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.006 killing process with pid 89118 00:18:19.006 10:18:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89118' 00:18:19.006 10:18:36 -- common/autotest_common.sh@955 -- # kill 89118 00:18:19.006 10:18:36 -- common/autotest_common.sh@960 -- # wait 89118 00:18:19.006 10:18:36 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:18:19.006 10:18:36 -- target/tls.sh@227 -- # cleanup 00:18:19.006 10:18:36 -- target/tls.sh@15 -- # process_shm --id 0 00:18:19.006 10:18:36 -- common/autotest_common.sh@806 -- # type=--id 00:18:19.006 10:18:36 -- common/autotest_common.sh@807 -- # id=0 00:18:19.006 10:18:36 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' 
-printf '%f\n' 00:18:19.006 10:18:36 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:19.006 10:18:36 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:19.006 10:18:36 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:19.006 10:18:36 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:19.006 nvmf_trace.0 00:18:19.006 10:18:36 -- common/autotest_common.sh@821 -- # return 0 00:18:19.006 10:18:36 -- target/tls.sh@16 -- # killprocess 89162 00:18:19.006 10:18:36 -- common/autotest_common.sh@936 -- # '[' -z 89162 ']' 00:18:19.006 10:18:36 -- common/autotest_common.sh@940 -- # kill -0 89162 00:18:19.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89162) - No such process 00:18:19.006 Process with pid 89162 is not found 00:18:19.006 10:18:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89162 is not found' 00:18:19.006 10:18:36 -- target/tls.sh@17 -- # nvmftestfini 00:18:19.006 10:18:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:19.006 10:18:36 -- nvmf/common.sh@116 -- # sync 00:18:19.007 10:18:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:19.007 10:18:36 -- nvmf/common.sh@119 -- # set +e 00:18:19.007 10:18:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:19.007 10:18:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:19.007 rmmod nvme_tcp 00:18:19.007 rmmod nvme_fabrics 00:18:19.007 rmmod nvme_keyring 00:18:19.007 10:18:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:19.007 10:18:36 -- nvmf/common.sh@123 -- # set -e 00:18:19.007 10:18:36 -- nvmf/common.sh@124 -- # return 0 00:18:19.007 10:18:36 -- nvmf/common.sh@477 -- # '[' -n 89118 ']' 00:18:19.007 10:18:36 -- nvmf/common.sh@478 -- # killprocess 89118 00:18:19.007 10:18:36 -- common/autotest_common.sh@936 -- # '[' -z 89118 ']' 00:18:19.007 10:18:36 -- common/autotest_common.sh@940 -- # kill -0 89118 00:18:19.007 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89118) - No such process 00:18:19.007 Process with pid 89118 is not found 00:18:19.007 10:18:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89118 is not found' 00:18:19.007 10:18:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:19.007 10:18:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:19.007 10:18:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:19.007 10:18:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.007 10:18:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:19.007 10:18:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.007 10:18:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.007 10:18:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.007 10:18:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:19.007 10:18:36 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:18:19.007 00:18:19.007 real 1m7.237s 00:18:19.007 user 1m43.109s 00:18:19.007 sys 0m23.795s 00:18:19.007 10:18:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.007 ************************************ 00:18:19.007 END TEST nvmf_tls 00:18:19.007 10:18:36 -- common/autotest_common.sh@10 -- # set +x 00:18:19.007 
************************************ 00:18:19.007 10:18:36 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:19.007 10:18:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:19.007 10:18:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.007 10:18:36 -- common/autotest_common.sh@10 -- # set +x 00:18:19.007 ************************************ 00:18:19.007 START TEST nvmf_fips 00:18:19.007 ************************************ 00:18:19.007 10:18:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:19.007 * Looking for test storage... 00:18:19.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:19.007 10:18:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:19.007 10:18:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:19.007 10:18:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:19.007 10:18:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:19.007 10:18:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:19.007 10:18:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:19.007 10:18:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:19.007 10:18:37 -- scripts/common.sh@335 -- # IFS=.-: 00:18:19.007 10:18:37 -- scripts/common.sh@335 -- # read -ra ver1 00:18:19.007 10:18:37 -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.007 10:18:37 -- scripts/common.sh@336 -- # read -ra ver2 00:18:19.007 10:18:37 -- scripts/common.sh@337 -- # local 'op=<' 00:18:19.007 10:18:37 -- scripts/common.sh@339 -- # ver1_l=2 00:18:19.007 10:18:37 -- scripts/common.sh@340 -- # ver2_l=1 00:18:19.007 10:18:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:19.007 10:18:37 -- scripts/common.sh@343 -- # case "$op" in 00:18:19.007 10:18:37 -- scripts/common.sh@344 -- # : 1 00:18:19.007 10:18:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:19.007 10:18:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.007 10:18:37 -- scripts/common.sh@364 -- # decimal 1 00:18:19.007 10:18:37 -- scripts/common.sh@352 -- # local d=1 00:18:19.007 10:18:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.007 10:18:37 -- scripts/common.sh@354 -- # echo 1 00:18:19.007 10:18:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:19.007 10:18:37 -- scripts/common.sh@365 -- # decimal 2 00:18:19.007 10:18:37 -- scripts/common.sh@352 -- # local d=2 00:18:19.007 10:18:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.007 10:18:37 -- scripts/common.sh@354 -- # echo 2 00:18:19.007 10:18:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:19.007 10:18:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:19.007 10:18:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:19.007 10:18:37 -- scripts/common.sh@367 -- # return 0 00:18:19.007 10:18:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.007 10:18:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.007 --rc genhtml_branch_coverage=1 00:18:19.007 --rc genhtml_function_coverage=1 00:18:19.007 --rc genhtml_legend=1 00:18:19.007 --rc geninfo_all_blocks=1 00:18:19.007 --rc geninfo_unexecuted_blocks=1 00:18:19.007 00:18:19.007 ' 00:18:19.007 10:18:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.007 --rc genhtml_branch_coverage=1 00:18:19.007 --rc genhtml_function_coverage=1 00:18:19.007 --rc genhtml_legend=1 00:18:19.007 --rc geninfo_all_blocks=1 00:18:19.007 --rc geninfo_unexecuted_blocks=1 00:18:19.007 00:18:19.007 ' 00:18:19.007 10:18:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.007 --rc genhtml_branch_coverage=1 00:18:19.007 --rc genhtml_function_coverage=1 00:18:19.007 --rc genhtml_legend=1 00:18:19.007 --rc geninfo_all_blocks=1 00:18:19.007 --rc geninfo_unexecuted_blocks=1 00:18:19.007 00:18:19.007 ' 00:18:19.007 10:18:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:19.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.007 --rc genhtml_branch_coverage=1 00:18:19.007 --rc genhtml_function_coverage=1 00:18:19.007 --rc genhtml_legend=1 00:18:19.007 --rc geninfo_all_blocks=1 00:18:19.007 --rc geninfo_unexecuted_blocks=1 00:18:19.007 00:18:19.007 ' 00:18:19.007 10:18:37 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:19.007 10:18:37 -- nvmf/common.sh@7 -- # uname -s 00:18:19.007 10:18:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.007 10:18:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.007 10:18:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.007 10:18:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.007 10:18:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:19.007 10:18:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:19.007 10:18:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.007 10:18:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:19.007 10:18:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.007 10:18:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:19.007 10:18:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:18:19.007 
10:18:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:18:19.007 10:18:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.007 10:18:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:19.007 10:18:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:19.007 10:18:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:19.007 10:18:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.007 10:18:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.007 10:18:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.007 10:18:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.007 10:18:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.007 10:18:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.007 10:18:37 -- paths/export.sh@5 -- # export PATH 00:18:19.007 10:18:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.007 10:18:37 -- nvmf/common.sh@46 -- # : 0 00:18:19.007 10:18:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:19.007 10:18:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:19.007 10:18:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:19.007 10:18:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.008 10:18:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.008 10:18:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:19.008 10:18:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:19.008 10:18:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:19.008 10:18:37 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.008 10:18:37 -- fips/fips.sh@89 -- # check_openssl_version 00:18:19.008 10:18:37 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:19.008 10:18:37 -- fips/fips.sh@85 -- # openssl version 00:18:19.008 10:18:37 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:19.008 10:18:37 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:18:19.008 10:18:37 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:19.008 10:18:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:19.008 10:18:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:19.008 10:18:37 -- scripts/common.sh@335 -- # IFS=.-: 00:18:19.008 10:18:37 -- scripts/common.sh@335 -- # read -ra ver1 00:18:19.008 10:18:37 -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.008 10:18:37 -- scripts/common.sh@336 -- # read -ra ver2 00:18:19.008 10:18:37 -- scripts/common.sh@337 -- # local 'op=>=' 00:18:19.008 10:18:37 -- scripts/common.sh@339 -- # ver1_l=3 00:18:19.008 10:18:37 -- scripts/common.sh@340 -- # ver2_l=3 00:18:19.008 10:18:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:19.008 10:18:37 -- scripts/common.sh@343 -- # case "$op" in 00:18:19.008 10:18:37 -- scripts/common.sh@347 -- # : 1 00:18:19.008 10:18:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:19.008 10:18:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.008 10:18:37 -- scripts/common.sh@364 -- # decimal 3 00:18:19.008 10:18:37 -- scripts/common.sh@352 -- # local d=3 00:18:19.008 10:18:37 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:19.008 10:18:37 -- scripts/common.sh@354 -- # echo 3 00:18:19.008 10:18:37 -- scripts/common.sh@364 -- # ver1[v]=3 00:18:19.008 10:18:37 -- scripts/common.sh@365 -- # decimal 3 00:18:19.008 10:18:37 -- scripts/common.sh@352 -- # local d=3 00:18:19.008 10:18:37 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:19.008 10:18:37 -- scripts/common.sh@354 -- # echo 3 00:18:19.008 10:18:37 -- scripts/common.sh@365 -- # ver2[v]=3 00:18:19.008 10:18:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:19.008 10:18:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:19.008 10:18:37 -- scripts/common.sh@363 -- # (( v++ )) 00:18:19.008 10:18:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:19.008 10:18:37 -- scripts/common.sh@364 -- # decimal 1 00:18:19.008 10:18:37 -- scripts/common.sh@352 -- # local d=1 00:18:19.008 10:18:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.008 10:18:37 -- scripts/common.sh@354 -- # echo 1 00:18:19.008 10:18:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:19.008 10:18:37 -- scripts/common.sh@365 -- # decimal 0 00:18:19.008 10:18:37 -- scripts/common.sh@352 -- # local d=0 00:18:19.008 10:18:37 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:19.008 10:18:37 -- scripts/common.sh@354 -- # echo 0 00:18:19.008 10:18:37 -- scripts/common.sh@365 -- # ver2[v]=0 00:18:19.008 10:18:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:19.008 10:18:37 -- scripts/common.sh@366 -- # return 0 00:18:19.008 10:18:37 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:19.008 10:18:37 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:19.008 10:18:37 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:19.008 10:18:37 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:19.008 10:18:37 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:19.008 10:18:37 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:19.008 10:18:37 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:19.008 10:18:37 -- fips/fips.sh@113 -- # build_openssl_config 00:18:19.008 10:18:37 -- fips/fips.sh@37 -- # cat 00:18:19.008 10:18:37 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:19.008 10:18:37 -- fips/fips.sh@58 -- # cat - 00:18:19.008 10:18:37 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:19.008 10:18:37 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:19.008 10:18:37 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:19.008 10:18:37 -- fips/fips.sh@116 -- # openssl list -providers 00:18:19.008 10:18:37 -- fips/fips.sh@116 -- # grep name 00:18:19.008 10:18:37 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:19.008 10:18:37 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:19.008 10:18:37 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:19.008 10:18:37 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:19.008 10:18:37 -- common/autotest_common.sh@650 -- # local es=0 00:18:19.008 10:18:37 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:19.008 10:18:37 -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:19.008 10:18:37 -- fips/fips.sh@127 -- # : 00:18:19.008 10:18:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.008 10:18:37 -- common/autotest_common.sh@642 -- # type -t openssl 00:18:19.008 10:18:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.008 10:18:37 -- common/autotest_common.sh@644 -- # type -P openssl 00:18:19.008 10:18:37 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:19.008 10:18:37 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:19.008 10:18:37 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:19.008 10:18:37 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:19.008 Error setting digest 00:18:19.008 40B29670C37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:19.008 40B29670C37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:19.008 10:18:37 -- common/autotest_common.sh@653 -- # es=1 00:18:19.008 10:18:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:19.008 10:18:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:19.008 10:18:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:19.008 10:18:37 -- fips/fips.sh@130 -- # nvmftestinit 00:18:19.008 10:18:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:19.008 10:18:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.008 10:18:37 -- nvmf/common.sh@436 -- # prepare_net_devs 
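[editor's note] Everything the FIPS pre-check needs is visible in the trace above: point OpenSSL at the generated config, confirm that exactly a base and a fips provider are loaded, and verify that a non-approved digest such as MD5 is rejected. A condensed sketch of that check (the contents of spdk_fips.conf are not shown in the log, and piping echo into openssl md5 stands in for the /dev/fd/62 process substitution the harness uses):

  export OPENSSL_CONF=spdk_fips.conf          # config produced by build_openssl_config
  mapfile -t providers < <(openssl list -providers | grep name)
  (( ${#providers[@]} == 2 )) || exit 1       # expect exactly two providers
  [[ ${providers[0]} == *base* ]] || exit 1   # first:  the base provider
  [[ ${providers[1]} == *fips* ]] || exit 1   # second: the FIPS provider
  # under FIPS, MD5 must fail; success here means FIPS mode is not active
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "MD5 unexpectedly succeeded - FIPS mode is not active" >&2
      exit 1
  fi

The "Error setting digest" lines that follow in the trace are the expected outcome of that negative MD5 test.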
00:18:19.008 10:18:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:19.008 10:18:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:19.008 10:18:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.008 10:18:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.008 10:18:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.008 10:18:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:19.008 10:18:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:19.008 10:18:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:19.008 10:18:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:19.008 10:18:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:19.008 10:18:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:19.008 10:18:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.008 10:18:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.008 10:18:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:19.008 10:18:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:19.008 10:18:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:19.008 10:18:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:19.008 10:18:37 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:19.008 10:18:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.008 10:18:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:19.008 10:18:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:19.008 10:18:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:19.008 10:18:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:19.008 10:18:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:19.008 10:18:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:19.008 Cannot find device "nvmf_tgt_br" 00:18:19.008 10:18:37 -- nvmf/common.sh@154 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:19.008 Cannot find device "nvmf_tgt_br2" 00:18:19.008 10:18:37 -- nvmf/common.sh@155 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:19.008 10:18:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:19.008 Cannot find device "nvmf_tgt_br" 00:18:19.008 10:18:37 -- nvmf/common.sh@157 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:19.008 Cannot find device "nvmf_tgt_br2" 00:18:19.008 10:18:37 -- nvmf/common.sh@158 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:19.008 10:18:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:19.008 10:18:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:19.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.008 10:18:37 -- nvmf/common.sh@161 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:19.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:19.008 10:18:37 -- nvmf/common.sh@162 -- # true 00:18:19.008 10:18:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:19.008 10:18:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:19.008 10:18:37 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:19.008 10:18:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:19.008 10:18:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:19.008 10:18:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:19.008 10:18:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:19.008 10:18:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.008 10:18:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:19.008 10:18:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:19.008 10:18:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:19.008 10:18:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:19.008 10:18:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:19.008 10:18:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.009 10:18:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:19.009 10:18:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:19.009 10:18:37 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:19.009 10:18:37 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:19.009 10:18:37 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:19.009 10:18:37 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:19.009 10:18:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:19.009 10:18:37 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:19.009 10:18:37 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:19.009 10:18:37 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:19.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:19.009 00:18:19.009 --- 10.0.0.2 ping statistics --- 00:18:19.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.009 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:19.009 10:18:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:19.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:19.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:19.009 00:18:19.009 --- 10.0.0.3 ping statistics --- 00:18:19.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.009 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:19.009 10:18:37 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:19.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:19.009 00:18:19.009 --- 10.0.0.1 ping statistics --- 00:18:19.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.009 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:19.009 10:18:37 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.009 10:18:37 -- nvmf/common.sh@421 -- # return 0 00:18:19.009 10:18:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:19.009 10:18:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.009 10:18:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:19.009 10:18:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:19.009 10:18:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.009 10:18:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:19.009 10:18:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:19.009 10:18:37 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:19.009 10:18:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.009 10:18:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.009 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:18:19.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.009 10:18:37 -- nvmf/common.sh@469 -- # nvmfpid=89527 00:18:19.009 10:18:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.009 10:18:37 -- nvmf/common.sh@470 -- # waitforlisten 89527 00:18:19.009 10:18:37 -- common/autotest_common.sh@829 -- # '[' -z 89527 ']' 00:18:19.009 10:18:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.009 10:18:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.009 10:18:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.009 10:18:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.009 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:18:19.009 [2024-11-19 10:18:37.749486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:19.009 [2024-11-19 10:18:37.749767] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.009 [2024-11-19 10:18:37.899981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.009 [2024-11-19 10:18:37.944657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.009 [2024-11-19 10:18:37.944988] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.009 [2024-11-19 10:18:37.945010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.009 [2024-11-19 10:18:37.945023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
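[editor's note] With the namespace networking verified by the pings, the harness prefixes the target command with ip netns exec and then waits for the RPC socket before sending any configuration. The internals of waitforlisten are not shown in this trace; a simple equivalent poll loop (an assumption, not the helper's actual implementation — retry count and sleep interval are illustrative) could look like:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # poll the UNIX-domain RPC socket until the target answers, or give up
  for _ in $(seq 1 100); do
      if "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done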
00:18:19.009 [2024-11-19 10:18:37.945058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.267 10:18:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:19.267 10:18:38 -- common/autotest_common.sh@862 -- # return 0 00:18:19.267 10:18:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:19.267 10:18:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:19.267 10:18:38 -- common/autotest_common.sh@10 -- # set +x 00:18:19.267 10:18:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.267 10:18:38 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:19.267 10:18:38 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:19.267 10:18:38 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:19.267 10:18:38 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:19.267 10:18:38 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:19.267 10:18:38 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:19.267 10:18:38 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:19.267 10:18:38 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:19.832 [2024-11-19 10:18:39.072757] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.832 [2024-11-19 10:18:39.088713] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:19.832 [2024-11-19 10:18:39.088947] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.832 malloc0 00:18:19.832 10:18:39 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:19.832 10:18:39 -- fips/fips.sh@147 -- # bdevperf_pid=89584 00:18:19.832 10:18:39 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:19.832 10:18:39 -- fips/fips.sh@148 -- # waitforlisten 89584 /var/tmp/bdevperf.sock 00:18:19.832 10:18:39 -- common/autotest_common.sh@829 -- # '[' -z 89584 ']' 00:18:19.832 10:18:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.832 10:18:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.832 10:18:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.832 10:18:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.832 10:18:39 -- common/autotest_common.sh@10 -- # set +x 00:18:19.832 [2024-11-19 10:18:39.214089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:19.832 [2024-11-19 10:18:39.214408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89584 ] 00:18:19.832 [2024-11-19 10:18:39.351455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.090 [2024-11-19 10:18:39.389103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.025 10:18:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.025 10:18:40 -- common/autotest_common.sh@862 -- # return 0 00:18:21.025 10:18:40 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:21.025 [2024-11-19 10:18:40.484890] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:21.025 TLSTESTn1 00:18:21.283 10:18:40 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:21.283 Running I/O for 10 seconds... 00:18:31.253 00:18:31.253 Latency(us) 00:18:31.253 [2024-11-19T10:18:50.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.253 [2024-11-19T10:18:50.799Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:31.253 Verification LBA range: start 0x0 length 0x2000 00:18:31.253 TLSTESTn1 : 10.02 5189.51 20.27 0.00 0.00 24625.71 5302.46 30980.65 00:18:31.253 [2024-11-19T10:18:50.799Z] =================================================================================================================== 00:18:31.253 [2024-11-19T10:18:50.799Z] Total : 5189.51 20.27 0.00 0.00 24625.71 5302.46 30980.65 00:18:31.253 0 00:18:31.253 10:18:50 -- fips/fips.sh@1 -- # cleanup 00:18:31.253 10:18:50 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:31.253 10:18:50 -- common/autotest_common.sh@806 -- # type=--id 00:18:31.253 10:18:50 -- common/autotest_common.sh@807 -- # id=0 00:18:31.253 10:18:50 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:31.253 10:18:50 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:31.253 10:18:50 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:31.253 10:18:50 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:31.253 10:18:50 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:31.253 10:18:50 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:31.253 nvmf_trace.0 00:18:31.512 10:18:50 -- common/autotest_common.sh@821 -- # return 0 00:18:31.512 10:18:50 -- fips/fips.sh@16 -- # killprocess 89584 00:18:31.512 10:18:50 -- common/autotest_common.sh@936 -- # '[' -z 89584 ']' 00:18:31.512 10:18:50 -- common/autotest_common.sh@940 -- # kill -0 89584 00:18:31.512 10:18:50 -- common/autotest_common.sh@941 -- # uname 00:18:31.512 10:18:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.512 10:18:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89584 00:18:31.512 10:18:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:31.512 10:18:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:31.512 
killing process with pid 89584 00:18:31.512 10:18:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89584' 00:18:31.512 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.512 00:18:31.512 Latency(us) 00:18:31.512 [2024-11-19T10:18:51.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.512 [2024-11-19T10:18:51.058Z] =================================================================================================================== 00:18:31.512 [2024-11-19T10:18:51.058Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.512 10:18:50 -- common/autotest_common.sh@955 -- # kill 89584 00:18:31.512 10:18:50 -- common/autotest_common.sh@960 -- # wait 89584 00:18:31.512 10:18:50 -- fips/fips.sh@17 -- # nvmftestfini 00:18:31.512 10:18:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:31.512 10:18:51 -- nvmf/common.sh@116 -- # sync 00:18:31.512 10:18:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:31.512 10:18:51 -- nvmf/common.sh@119 -- # set +e 00:18:31.512 10:18:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:31.512 10:18:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:31.512 rmmod nvme_tcp 00:18:31.770 rmmod nvme_fabrics 00:18:31.770 rmmod nvme_keyring 00:18:31.770 10:18:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:31.770 10:18:51 -- nvmf/common.sh@123 -- # set -e 00:18:31.770 10:18:51 -- nvmf/common.sh@124 -- # return 0 00:18:31.770 10:18:51 -- nvmf/common.sh@477 -- # '[' -n 89527 ']' 00:18:31.770 10:18:51 -- nvmf/common.sh@478 -- # killprocess 89527 00:18:31.770 10:18:51 -- common/autotest_common.sh@936 -- # '[' -z 89527 ']' 00:18:31.770 10:18:51 -- common/autotest_common.sh@940 -- # kill -0 89527 00:18:31.770 10:18:51 -- common/autotest_common.sh@941 -- # uname 00:18:31.770 10:18:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:31.770 10:18:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89527 00:18:31.770 10:18:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:31.770 10:18:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:31.770 killing process with pid 89527 00:18:31.770 10:18:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89527' 00:18:31.770 10:18:51 -- common/autotest_common.sh@955 -- # kill 89527 00:18:31.770 10:18:51 -- common/autotest_common.sh@960 -- # wait 89527 00:18:31.770 10:18:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:31.770 10:18:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:31.770 10:18:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:31.770 10:18:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:31.770 10:18:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:31.770 10:18:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.770 10:18:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.770 10:18:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.770 10:18:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:31.770 10:18:51 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:31.770 00:18:31.770 real 0m14.377s 00:18:31.770 user 0m19.472s 00:18:31.770 sys 0m5.868s 00:18:31.770 10:18:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:31.770 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 ************************************ 00:18:31.770 END TEST nvmf_fips 
00:18:31.770 ************************************ 00:18:32.030 10:18:51 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:32.030 10:18:51 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:32.030 10:18:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:32.030 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.030 ************************************ 00:18:32.030 START TEST nvmf_fuzz 00:18:32.030 ************************************ 00:18:32.030 10:18:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:32.030 * Looking for test storage... 00:18:32.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:32.030 10:18:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:32.030 10:18:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:32.030 10:18:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:32.030 10:18:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:32.030 10:18:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:32.030 10:18:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:32.030 10:18:51 -- scripts/common.sh@335 -- # IFS=.-: 00:18:32.030 10:18:51 -- scripts/common.sh@335 -- # read -ra ver1 00:18:32.030 10:18:51 -- scripts/common.sh@336 -- # IFS=.-: 00:18:32.030 10:18:51 -- scripts/common.sh@336 -- # read -ra ver2 00:18:32.030 10:18:51 -- scripts/common.sh@337 -- # local 'op=<' 00:18:32.030 10:18:51 -- scripts/common.sh@339 -- # ver1_l=2 00:18:32.030 10:18:51 -- scripts/common.sh@340 -- # ver2_l=1 00:18:32.030 10:18:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:32.030 10:18:51 -- scripts/common.sh@343 -- # case "$op" in 00:18:32.030 10:18:51 -- scripts/common.sh@344 -- # : 1 00:18:32.030 10:18:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:32.030 10:18:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.030 10:18:51 -- scripts/common.sh@364 -- # decimal 1 00:18:32.030 10:18:51 -- scripts/common.sh@352 -- # local d=1 00:18:32.030 10:18:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:32.030 10:18:51 -- scripts/common.sh@354 -- # echo 1 00:18:32.030 10:18:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:32.030 10:18:51 -- scripts/common.sh@365 -- # decimal 2 00:18:32.030 10:18:51 -- scripts/common.sh@352 -- # local d=2 00:18:32.030 10:18:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:32.030 10:18:51 -- scripts/common.sh@354 -- # echo 2 00:18:32.030 10:18:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:32.030 10:18:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:32.030 10:18:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:32.030 10:18:51 -- scripts/common.sh@367 -- # return 0 00:18:32.030 10:18:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.030 --rc genhtml_branch_coverage=1 00:18:32.030 --rc genhtml_function_coverage=1 00:18:32.030 --rc genhtml_legend=1 00:18:32.030 --rc geninfo_all_blocks=1 00:18:32.030 --rc geninfo_unexecuted_blocks=1 00:18:32.030 00:18:32.030 ' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.030 --rc genhtml_branch_coverage=1 00:18:32.030 --rc genhtml_function_coverage=1 00:18:32.030 --rc genhtml_legend=1 00:18:32.030 --rc geninfo_all_blocks=1 00:18:32.030 --rc geninfo_unexecuted_blocks=1 00:18:32.030 00:18:32.030 ' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.030 --rc genhtml_branch_coverage=1 00:18:32.030 --rc genhtml_function_coverage=1 00:18:32.030 --rc genhtml_legend=1 00:18:32.030 --rc geninfo_all_blocks=1 00:18:32.030 --rc geninfo_unexecuted_blocks=1 00:18:32.030 00:18:32.030 ' 00:18:32.030 10:18:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:32.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:32.030 --rc genhtml_branch_coverage=1 00:18:32.030 --rc genhtml_function_coverage=1 00:18:32.030 --rc genhtml_legend=1 00:18:32.030 --rc geninfo_all_blocks=1 00:18:32.030 --rc geninfo_unexecuted_blocks=1 00:18:32.030 00:18:32.030 ' 00:18:32.030 10:18:51 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:32.030 10:18:51 -- nvmf/common.sh@7 -- # uname -s 00:18:32.030 10:18:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.030 10:18:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.030 10:18:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.030 10:18:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.030 10:18:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.030 10:18:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.030 10:18:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.030 10:18:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.030 10:18:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.030 10:18:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.030 10:18:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
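[editor's note] Both the OpenSSL check in the FIPS test (ge 3.1.1 3.0.0) and the lcov check at the top of this fuzz test (lt 1.15 2) go through the same scripts/common.sh machinery: split each version on '.', '-' and ':', then compare component by component. A standalone sketch of that idea — the helper name version_ge is mine, and the real cmp_versions also validates each component with its decimal() guard:

  # return success if $1 >= $2, comparing dotted versions component-wise
  version_ge() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0   # all components equal
  }

  version_ge 3.1.1 3.0.0 && echo "3.1.1 >= 3.0.0"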
00:18:32.030 10:18:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:18:32.030 10:18:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.030 10:18:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.030 10:18:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:32.030 10:18:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:32.030 10:18:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.030 10:18:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.030 10:18:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.030 10:18:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.030 10:18:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.030 10:18:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.030 10:18:51 -- paths/export.sh@5 -- # export PATH 00:18:32.030 10:18:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.030 10:18:51 -- nvmf/common.sh@46 -- # : 0 00:18:32.030 10:18:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:32.030 10:18:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:32.030 10:18:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:32.030 10:18:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.030 10:18:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.030 10:18:51 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:18:32.030 10:18:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:32.030 10:18:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:32.030 10:18:51 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:32.030 10:18:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:32.031 10:18:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.031 10:18:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:32.031 10:18:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:32.031 10:18:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:32.031 10:18:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.031 10:18:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.031 10:18:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.031 10:18:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:32.031 10:18:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:32.031 10:18:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:32.031 10:18:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:32.031 10:18:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:32.031 10:18:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:32.031 10:18:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.031 10:18:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.031 10:18:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:32.031 10:18:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:32.031 10:18:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:32.031 10:18:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:32.031 10:18:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:32.031 10:18:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.031 10:18:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:32.031 10:18:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:32.031 10:18:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:32.031 10:18:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:32.031 10:18:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:32.031 10:18:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:32.031 Cannot find device "nvmf_tgt_br" 00:18:32.031 10:18:51 -- nvmf/common.sh@154 -- # true 00:18:32.031 10:18:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:32.031 Cannot find device "nvmf_tgt_br2" 00:18:32.031 10:18:51 -- nvmf/common.sh@155 -- # true 00:18:32.031 10:18:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:32.290 10:18:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:32.290 Cannot find device "nvmf_tgt_br" 00:18:32.290 10:18:51 -- nvmf/common.sh@157 -- # true 00:18:32.290 10:18:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:32.290 Cannot find device "nvmf_tgt_br2" 00:18:32.290 10:18:51 -- nvmf/common.sh@158 -- # true 00:18:32.290 10:18:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:32.290 10:18:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:32.290 10:18:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:32.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.290 10:18:51 -- nvmf/common.sh@161 -- # true 00:18:32.290 10:18:51 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:32.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:32.290 10:18:51 -- nvmf/common.sh@162 -- # true 00:18:32.290 10:18:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:32.290 10:18:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:32.290 10:18:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:32.290 10:18:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:32.290 10:18:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:32.290 10:18:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:32.290 10:18:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:32.290 10:18:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:32.290 10:18:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:32.290 10:18:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:32.290 10:18:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:32.290 10:18:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:32.290 10:18:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:32.290 10:18:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:32.290 10:18:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:32.290 10:18:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:32.290 10:18:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:32.290 10:18:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:32.290 10:18:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:32.290 10:18:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:32.290 10:18:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:32.290 10:18:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:32.549 10:18:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:32.549 10:18:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:32.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:32.549 00:18:32.549 --- 10.0.0.2 ping statistics --- 00:18:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.549 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:32.549 10:18:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:32.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:32.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:32.549 00:18:32.549 --- 10.0.0.3 ping statistics --- 00:18:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.549 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:32.549 10:18:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:32.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:32.549 00:18:32.549 --- 10.0.0.1 ping statistics --- 00:18:32.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.549 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:32.549 10:18:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.549 10:18:51 -- nvmf/common.sh@421 -- # return 0 00:18:32.549 10:18:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.549 10:18:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.549 10:18:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.549 10:18:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.549 10:18:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.549 10:18:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.549 10:18:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.549 10:18:51 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=89932 00:18:32.549 10:18:51 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:32.549 10:18:51 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:32.549 10:18:51 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 89932 00:18:32.549 10:18:51 -- common/autotest_common.sh@829 -- # '[' -z 89932 ']' 00:18:32.549 10:18:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.549 10:18:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.549 10:18:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
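[editor's note] The fuzz test rebuilds the same throwaway topology the FIPS test used: one veth pair for the initiator in the root namespace and two for the target inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge, with an iptables rule opening TCP/4420. Condensed from the commands in the trace above (addresses and interface names as logged), the whole setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator

The three successful pings recorded in the log confirm this bridge path before the fuzz target is started on it.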
00:18:32.549 10:18:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.549 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:18:32.807 10:18:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.807 10:18:52 -- common/autotest_common.sh@862 -- # return 0 00:18:32.807 10:18:52 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:32.807 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.807 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.807 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.807 10:18:52 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:32.807 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.807 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.807 Malloc0 00:18:32.807 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.807 10:18:52 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:32.807 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.807 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.807 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.808 10:18:52 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:32.808 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.808 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.808 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.808 10:18:52 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:32.808 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.808 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.808 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.808 10:18:52 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:32.808 10:18:52 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:33.066 Shutting down the fuzz application 00:18:33.066 10:18:52 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:33.324 Shutting down the fuzz application 00:18:33.324 10:18:52 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.324 10:18:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.324 10:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:33.324 10:18:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.324 10:18:52 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:33.324 10:18:52 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:33.324 10:18:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:33.324 10:18:52 -- nvmf/common.sh@116 -- # sync 00:18:33.324 10:18:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:33.324 10:18:52 -- nvmf/common.sh@119 -- # set +e 00:18:33.324 10:18:52 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:18:33.324 10:18:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:33.324 rmmod nvme_tcp 00:18:33.582 rmmod nvme_fabrics 00:18:33.582 rmmod nvme_keyring 00:18:33.582 10:18:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:33.582 10:18:52 -- nvmf/common.sh@123 -- # set -e 00:18:33.582 10:18:52 -- nvmf/common.sh@124 -- # return 0 00:18:33.582 10:18:52 -- nvmf/common.sh@477 -- # '[' -n 89932 ']' 00:18:33.582 10:18:52 -- nvmf/common.sh@478 -- # killprocess 89932 00:18:33.582 10:18:52 -- common/autotest_common.sh@936 -- # '[' -z 89932 ']' 00:18:33.582 10:18:52 -- common/autotest_common.sh@940 -- # kill -0 89932 00:18:33.582 10:18:52 -- common/autotest_common.sh@941 -- # uname 00:18:33.582 10:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.582 10:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89932 00:18:33.582 10:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:33.582 10:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:33.582 killing process with pid 89932 00:18:33.582 10:18:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89932' 00:18:33.582 10:18:52 -- common/autotest_common.sh@955 -- # kill 89932 00:18:33.582 10:18:52 -- common/autotest_common.sh@960 -- # wait 89932 00:18:33.582 10:18:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:33.582 10:18:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:33.582 10:18:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:33.582 10:18:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.583 10:18:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:33.583 10:18:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.583 10:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.583 10:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.841 10:18:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:33.842 10:18:53 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:33.842 00:18:33.842 real 0m1.794s 00:18:33.842 user 0m1.706s 00:18:33.842 sys 0m0.550s 00:18:33.842 10:18:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:33.842 ************************************ 00:18:33.842 END TEST nvmf_fuzz 00:18:33.842 ************************************ 00:18:33.842 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:33.842 10:18:53 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:33.842 10:18:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:33.842 10:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.842 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:33.842 ************************************ 00:18:33.842 START TEST nvmf_multiconnection 00:18:33.842 ************************************ 00:18:33.842 10:18:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:33.842 * Looking for test storage... 
00:18:33.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:33.842 10:18:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:33.842 10:18:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:33.842 10:18:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:33.842 10:18:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:33.842 10:18:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:33.842 10:18:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:33.842 10:18:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:33.842 10:18:53 -- scripts/common.sh@335 -- # IFS=.-: 00:18:33.842 10:18:53 -- scripts/common.sh@335 -- # read -ra ver1 00:18:33.842 10:18:53 -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.842 10:18:53 -- scripts/common.sh@336 -- # read -ra ver2 00:18:33.842 10:18:53 -- scripts/common.sh@337 -- # local 'op=<' 00:18:33.842 10:18:53 -- scripts/common.sh@339 -- # ver1_l=2 00:18:33.842 10:18:53 -- scripts/common.sh@340 -- # ver2_l=1 00:18:33.842 10:18:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:33.842 10:18:53 -- scripts/common.sh@343 -- # case "$op" in 00:18:33.842 10:18:53 -- scripts/common.sh@344 -- # : 1 00:18:33.842 10:18:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:33.842 10:18:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:33.842 10:18:53 -- scripts/common.sh@364 -- # decimal 1 00:18:33.842 10:18:53 -- scripts/common.sh@352 -- # local d=1 00:18:33.842 10:18:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.842 10:18:53 -- scripts/common.sh@354 -- # echo 1 00:18:33.842 10:18:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:33.842 10:18:53 -- scripts/common.sh@365 -- # decimal 2 00:18:33.842 10:18:53 -- scripts/common.sh@352 -- # local d=2 00:18:33.842 10:18:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.842 10:18:53 -- scripts/common.sh@354 -- # echo 2 00:18:33.842 10:18:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:33.842 10:18:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:33.842 10:18:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:33.842 10:18:53 -- scripts/common.sh@367 -- # return 0 00:18:33.842 10:18:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.842 10:18:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.842 --rc genhtml_branch_coverage=1 00:18:33.842 --rc genhtml_function_coverage=1 00:18:33.842 --rc genhtml_legend=1 00:18:33.842 --rc geninfo_all_blocks=1 00:18:33.842 --rc geninfo_unexecuted_blocks=1 00:18:33.842 00:18:33.842 ' 00:18:33.842 10:18:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.842 --rc genhtml_branch_coverage=1 00:18:33.842 --rc genhtml_function_coverage=1 00:18:33.842 --rc genhtml_legend=1 00:18:33.842 --rc geninfo_all_blocks=1 00:18:33.842 --rc geninfo_unexecuted_blocks=1 00:18:33.842 00:18:33.842 ' 00:18:33.842 10:18:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.842 --rc genhtml_branch_coverage=1 00:18:33.842 --rc genhtml_function_coverage=1 00:18:33.842 --rc genhtml_legend=1 00:18:33.842 --rc geninfo_all_blocks=1 00:18:33.842 --rc geninfo_unexecuted_blocks=1 00:18:33.842 00:18:33.842 ' 00:18:33.842 
10:18:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:33.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.842 --rc genhtml_branch_coverage=1 00:18:33.842 --rc genhtml_function_coverage=1 00:18:33.842 --rc genhtml_legend=1 00:18:33.842 --rc geninfo_all_blocks=1 00:18:33.842 --rc geninfo_unexecuted_blocks=1 00:18:33.842 00:18:33.842 ' 00:18:33.842 10:18:53 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.842 10:18:53 -- nvmf/common.sh@7 -- # uname -s 00:18:33.842 10:18:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.842 10:18:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.842 10:18:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.842 10:18:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.842 10:18:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.842 10:18:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.842 10:18:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.842 10:18:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.842 10:18:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.842 10:18:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.842 10:18:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:18:33.842 10:18:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:18:33.842 10:18:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.842 10:18:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.842 10:18:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.842 10:18:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.842 10:18:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.842 10:18:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.842 10:18:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.842 10:18:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.842 10:18:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.842 10:18:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.842 10:18:53 -- paths/export.sh@5 -- # export PATH 00:18:33.842 10:18:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.842 10:18:53 -- nvmf/common.sh@46 -- # : 0 00:18:33.842 10:18:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:33.842 10:18:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:33.842 10:18:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:33.842 10:18:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.842 10:18:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.842 10:18:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:33.842 10:18:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:33.842 10:18:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:33.842 10:18:53 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.842 10:18:53 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.842 10:18:53 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:33.842 10:18:53 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:33.842 10:18:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:34.101 10:18:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.101 10:18:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:34.101 10:18:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:34.101 10:18:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:34.101 10:18:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.101 10:18:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:34.101 10:18:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.101 10:18:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:34.101 10:18:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:34.101 10:18:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:34.101 10:18:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:34.101 10:18:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:34.101 10:18:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:34.101 10:18:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.101 10:18:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.101 10:18:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:34.101 10:18:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:34.101 10:18:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.101 10:18:53 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.101 10:18:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.101 10:18:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.101 10:18:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:34.101 10:18:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:34.101 10:18:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:34.101 10:18:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:34.101 10:18:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:34.101 10:18:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:34.101 Cannot find device "nvmf_tgt_br" 00:18:34.101 10:18:53 -- nvmf/common.sh@154 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:34.101 Cannot find device "nvmf_tgt_br2" 00:18:34.101 10:18:53 -- nvmf/common.sh@155 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:34.101 10:18:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:34.101 Cannot find device "nvmf_tgt_br" 00:18:34.101 10:18:53 -- nvmf/common.sh@157 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:34.101 Cannot find device "nvmf_tgt_br2" 00:18:34.101 10:18:53 -- nvmf/common.sh@158 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:34.101 10:18:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:34.101 10:18:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:34.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.101 10:18:53 -- nvmf/common.sh@161 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:34.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:34.101 10:18:53 -- nvmf/common.sh@162 -- # true 00:18:34.101 10:18:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:34.101 10:18:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:34.101 10:18:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:34.101 10:18:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:34.101 10:18:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:34.101 10:18:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:34.101 10:18:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:34.101 10:18:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:34.101 10:18:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:34.101 10:18:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:34.101 10:18:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:34.101 10:18:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:34.101 10:18:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:34.101 10:18:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:34.101 10:18:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:18:34.101 10:18:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:34.101 10:18:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:34.101 10:18:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:34.101 10:18:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.361 10:18:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.361 10:18:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.361 10:18:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.361 10:18:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.361 10:18:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:34.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:34.361 00:18:34.361 --- 10.0.0.2 ping statistics --- 00:18:34.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.361 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:34.361 10:18:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:34.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:34.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:34.361 00:18:34.361 --- 10.0.0.3 ping statistics --- 00:18:34.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.361 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:34.361 10:18:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:34.361 00:18:34.361 --- 10.0.0.1 ping statistics --- 00:18:34.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.361 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:34.361 10:18:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.361 10:18:53 -- nvmf/common.sh@421 -- # return 0 00:18:34.361 10:18:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:34.361 10:18:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.361 10:18:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:34.361 10:18:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:34.361 10:18:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.361 10:18:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:34.361 10:18:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:34.361 10:18:53 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:34.361 10:18:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:34.361 10:18:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.361 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 10:18:53 -- nvmf/common.sh@469 -- # nvmfpid=90132 00:18:34.361 10:18:53 -- nvmf/common.sh@470 -- # waitforlisten 90132 00:18:34.361 10:18:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.361 10:18:53 -- common/autotest_common.sh@829 -- # '[' -z 90132 ']' 00:18:34.361 10:18:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.361 10:18:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.361 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:34.361 10:18:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.361 10:18:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.361 10:18:53 -- common/autotest_common.sh@10 -- # set +x 00:18:34.361 [2024-11-19 10:18:53.775924] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:34.361 [2024-11-19 10:18:53.776457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.619 [2024-11-19 10:18:53.927866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.619 [2024-11-19 10:18:53.976429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:34.619 [2024-11-19 10:18:53.976581] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.619 [2024-11-19 10:18:53.976594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.619 [2024-11-19 10:18:53.976603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.619 [2024-11-19 10:18:53.976781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.619 [2024-11-19 10:18:53.976905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.619 [2024-11-19 10:18:53.977656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.619 [2024-11-19 10:18:53.977669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.663 10:18:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.663 10:18:54 -- common/autotest_common.sh@862 -- # return 0 00:18:35.663 10:18:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:35.663 10:18:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.663 10:18:54 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.663 10:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 [2024-11-19 10:18:54.920263] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.663 10:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:54 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:35.663 10:18:54 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:54 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:35.663 10:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 Malloc1 00:18:35.663 10:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:54 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:35.663 10:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 
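The nvmf_veth_init and nvmfappstart steps traced above reduce, roughly, to the standalone sequence below. Interface names, addresses, firewall rules and the nvmf_tgt flags are taken from the trace; this is an illustrative sketch of the topology, not the common.sh implementation itself.

# Rebuild the veth/netns topology used by this run (names and addresses as traced).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address

ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the initiator- and target-side veth peers and open TCP/4420.
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # initiator -> target, as in the trace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
modprobe nvme-tcp

# Start the target inside the namespace with the same flags the trace shows.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &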
10:18:54 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:35.663 10:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:54 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.663 10:18:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 [2024-11-19 10:18:54.998874] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 Malloc2 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 Malloc3 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 
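The bdev/subsystem/namespace/listener pattern that repeats above, and continues below through cnode11, is the target/multiconnection.sh@21-25 loop. A standalone equivalent using scripts/rpc.py (which rpc_cmd effectively wraps) would look roughly like the sketch below; MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SUBSYS=11 are the values set earlier in the trace, and the transport flags are copied verbatim from the nvmf_create_transport call above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

# One TCP transport for the whole target; flags copied from the traced call.
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in $(seq 1 11); do                            # NVMF_SUBSYS=11
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # allow any host, serial SPDK$i
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done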
00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 Malloc4 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 Malloc5 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.663 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.663 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.663 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.663 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:35.663 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.664 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.664 Malloc6 00:18:35.664 10:18:55 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.664 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:35.664 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.664 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.923 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 Malloc7 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.923 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 Malloc8 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 
-- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.923 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 Malloc9 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.923 10:18:55 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 Malloc10 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:35.923 10:18:55 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 Malloc11 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.923 10:18:55 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:35.923 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.923 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.923 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.182 10:18:55 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:36.182 10:18:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.182 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:18:36.182 10:18:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.182 10:18:55 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:36.182 10:18:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.182 10:18:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:36.182 10:18:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:36.182 10:18:55 -- common/autotest_common.sh@1187 -- # local i=0 00:18:36.182 10:18:55 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.182 10:18:55 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:36.182 10:18:55 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:38.711 10:18:57 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:38.711 10:18:57 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:38.711 10:18:57 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:18:38.711 10:18:57 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:38.711 10:18:57 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.711 10:18:57 -- common/autotest_common.sh@1197 -- # return 0 00:18:38.711 10:18:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:38.711 10:18:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:38.711 10:18:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:38.711 10:18:57 -- common/autotest_common.sh@1187 -- # local i=0 00:18:38.711 10:18:57 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.711 10:18:57 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:38.711 10:18:57 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:40.612 10:18:59 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:40.612 10:18:59 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 
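On the initiator side, the connect phase that starts here repeats one pattern per subsystem: nvme connect, then poll until the new namespace is visible (the waitforserial helper). Stripped of the xtrace noise, each iteration is essentially the sketch below, with the hostnqn/hostid generated by nvme gen-hostnqn earlier in the trace.

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a
HOSTID=71696525-119b-4582-ab28-8c254b64780a

for n in $(seq 1 11); do
    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode$n" -a 10.0.0.2 -s 4420

    # waitforserial: up to ~16 tries, 2 s apart, until lsblk reports a block
    # device whose serial contains SPDK$n (a plain substring match, as traced).
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$n") >= 1 )) && break
    done
done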
00:18:40.612 10:18:59 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:40.612 10:18:59 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:40.612 10:18:59 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.612 10:18:59 -- common/autotest_common.sh@1197 -- # return 0 00:18:40.612 10:18:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.612 10:18:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:40.612 10:19:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:40.612 10:19:00 -- common/autotest_common.sh@1187 -- # local i=0 00:18:40.612 10:19:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.612 10:19:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:40.612 10:19:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:42.515 10:19:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:42.515 10:19:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:42.515 10:19:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:18:42.515 10:19:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:42.515 10:19:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.515 10:19:02 -- common/autotest_common.sh@1197 -- # return 0 00:18:42.515 10:19:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.515 10:19:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:42.773 10:19:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:42.773 10:19:02 -- common/autotest_common.sh@1187 -- # local i=0 00:18:42.773 10:19:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.773 10:19:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:42.773 10:19:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:44.675 10:19:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:44.675 10:19:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:44.675 10:19:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:18:44.933 10:19:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:44.933 10:19:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:44.933 10:19:04 -- common/autotest_common.sh@1197 -- # return 0 00:18:44.933 10:19:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:44.934 10:19:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:44.934 10:19:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:44.934 10:19:04 -- common/autotest_common.sh@1187 -- # local i=0 00:18:44.934 10:19:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.934 10:19:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:44.934 10:19:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:46.874 10:19:06 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:46.874 10:19:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:46.874 10:19:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:18:47.131 10:19:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:47.131 10:19:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.131 10:19:06 -- common/autotest_common.sh@1197 -- # return 0 00:18:47.131 10:19:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.131 10:19:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:47.131 10:19:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:47.131 10:19:06 -- common/autotest_common.sh@1187 -- # local i=0 00:18:47.131 10:19:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.131 10:19:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:47.131 10:19:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:49.659 10:19:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:49.659 10:19:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:49.659 10:19:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:18:49.659 10:19:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:49.659 10:19:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.659 10:19:08 -- common/autotest_common.sh@1197 -- # return 0 00:18:49.659 10:19:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.659 10:19:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:49.659 10:19:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:49.659 10:19:08 -- common/autotest_common.sh@1187 -- # local i=0 00:18:49.659 10:19:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:49.659 10:19:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:49.659 10:19:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:51.559 10:19:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:51.559 10:19:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:51.559 10:19:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:18:51.559 10:19:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:51.559 10:19:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.559 10:19:10 -- common/autotest_common.sh@1197 -- # return 0 00:18:51.559 10:19:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.559 10:19:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:51.559 10:19:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:51.559 10:19:10 -- common/autotest_common.sh@1187 -- # local i=0 00:18:51.559 10:19:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.559 10:19:10 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:51.559 10:19:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:53.458 10:19:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:53.458 10:19:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:53.458 10:19:12 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:18:53.717 10:19:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:53.717 10:19:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.717 10:19:13 -- common/autotest_common.sh@1197 -- # return 0 00:18:53.717 10:19:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.717 10:19:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:53.717 10:19:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:53.717 10:19:13 -- common/autotest_common.sh@1187 -- # local i=0 00:18:53.717 10:19:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:53.717 10:19:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:53.717 10:19:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:56.247 10:19:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:56.247 10:19:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:18:56.247 10:19:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:56.247 10:19:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:56.247 10:19:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.247 10:19:15 -- common/autotest_common.sh@1197 -- # return 0 00:18:56.247 10:19:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.247 10:19:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:56.247 10:19:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:56.247 10:19:15 -- common/autotest_common.sh@1187 -- # local i=0 00:18:56.247 10:19:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.247 10:19:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:56.247 10:19:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:18:58.175 10:19:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:18:58.175 10:19:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:18:58.175 10:19:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:18:58.175 10:19:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:18:58.175 10:19:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.175 10:19:17 -- common/autotest_common.sh@1197 -- # return 0 00:18:58.175 10:19:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.175 10:19:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:58.175 10:19:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:58.175 10:19:17 -- common/autotest_common.sh@1187 -- # local i=0 
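With all eleven controllers connected, the read phase that follows is driven by the fio-wrapper script, which emits one libaio job per namespace (the [global]/[jobN] sections printed next). Which /dev/nvmeXn1 belongs to which subsystem can be recovered from the serials; a small sketch, assuming lsblk's NAME,SERIAL columns as used above:

# Map each subsystem serial to the block device the kernel assigned it.
for n in $(seq 1 11); do
    dev=$(lsblk -l -o NAME,SERIAL | awk -v s="SPDK$n" '$2 == s {print $1; exit}')
    echo "SPDK$n -> /dev/$dev"
done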
00:18:58.175 10:19:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.175 10:19:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:18:58.175 10:19:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:00.078 10:19:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:00.078 10:19:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:00.078 10:19:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:19:00.078 10:19:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:00.078 10:19:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.078 10:19:19 -- common/autotest_common.sh@1197 -- # return 0 00:19:00.078 10:19:19 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:00.337 [global] 00:19:00.337 thread=1 00:19:00.337 invalidate=1 00:19:00.337 rw=read 00:19:00.337 time_based=1 00:19:00.337 runtime=10 00:19:00.337 ioengine=libaio 00:19:00.337 direct=1 00:19:00.337 bs=262144 00:19:00.337 iodepth=64 00:19:00.337 norandommap=1 00:19:00.337 numjobs=1 00:19:00.337 00:19:00.337 [job0] 00:19:00.337 filename=/dev/nvme0n1 00:19:00.337 [job1] 00:19:00.337 filename=/dev/nvme10n1 00:19:00.337 [job2] 00:19:00.337 filename=/dev/nvme1n1 00:19:00.337 [job3] 00:19:00.337 filename=/dev/nvme2n1 00:19:00.337 [job4] 00:19:00.337 filename=/dev/nvme3n1 00:19:00.337 [job5] 00:19:00.337 filename=/dev/nvme4n1 00:19:00.337 [job6] 00:19:00.337 filename=/dev/nvme5n1 00:19:00.337 [job7] 00:19:00.337 filename=/dev/nvme6n1 00:19:00.337 [job8] 00:19:00.337 filename=/dev/nvme7n1 00:19:00.337 [job9] 00:19:00.337 filename=/dev/nvme8n1 00:19:00.337 [job10] 00:19:00.337 filename=/dev/nvme9n1 00:19:00.337 Could not set queue depth (nvme0n1) 00:19:00.337 Could not set queue depth (nvme10n1) 00:19:00.337 Could not set queue depth (nvme1n1) 00:19:00.337 Could not set queue depth (nvme2n1) 00:19:00.337 Could not set queue depth (nvme3n1) 00:19:00.337 Could not set queue depth (nvme4n1) 00:19:00.337 Could not set queue depth (nvme5n1) 00:19:00.337 Could not set queue depth (nvme6n1) 00:19:00.337 Could not set queue depth (nvme7n1) 00:19:00.337 Could not set queue depth (nvme8n1) 00:19:00.337 Could not set queue depth (nvme9n1) 00:19:00.337 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:19:00.337 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:00.337 fio-3.35 00:19:00.337 Starting 11 threads 00:19:12.588 00:19:12.588 job0: (groupid=0, jobs=1): err= 0: pid=90610: Tue Nov 19 10:19:30 2024 00:19:12.588 read: IOPS=298, BW=74.5MiB/s (78.2MB/s)(758MiB/10172msec) 00:19:12.588 slat (usec): min=16, max=135379, avg=3296.53, stdev=12865.97 00:19:12.588 clat (msec): min=37, max=396, avg=211.07, stdev=38.78 00:19:12.588 lat (msec): min=38, max=396, avg=214.37, stdev=41.27 00:19:12.588 clat percentiles (msec): 00:19:12.588 | 1.00th=[ 66], 5.00th=[ 140], 10.00th=[ 159], 20.00th=[ 182], 00:19:12.588 | 30.00th=[ 203], 40.00th=[ 213], 50.00th=[ 220], 60.00th=[ 226], 00:19:12.588 | 70.00th=[ 230], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 259], 00:19:12.588 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 309], 99.95th=[ 397], 00:19:12.588 | 99.99th=[ 397] 00:19:12.588 bw ( KiB/s): min=60416, max=104960, per=4.26%, avg=75984.65, stdev=12414.45, samples=20 00:19:12.588 iops : min= 236, max= 410, avg=296.75, stdev=48.52, samples=20 00:19:12.588 lat (msec) : 50=0.16%, 100=1.65%, 250=88.79%, 500=9.40% 00:19:12.588 cpu : usr=0.12%, sys=1.08%, ctx=571, majf=0, minf=4097 00:19:12.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=3033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job1: (groupid=0, jobs=1): err= 0: pid=90611: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=1199, BW=300MiB/s (314MB/s)(3011MiB/10041msec) 00:19:12.589 slat (usec): min=10, max=147417, avg=803.54, stdev=3917.97 00:19:12.589 clat (usec): min=448, max=287080, avg=52477.04, stdev=33036.94 00:19:12.589 lat (usec): min=499, max=377985, avg=53280.58, stdev=33606.77 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 11], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 30], 00:19:12.589 | 30.00th=[ 36], 40.00th=[ 45], 50.00th=[ 52], 60.00th=[ 57], 00:19:12.589 | 70.00th=[ 62], 80.00th=[ 66], 90.00th=[ 71], 95.00th=[ 78], 00:19:12.589 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 288], 99.95th=[ 288], 00:19:12.589 | 99.99th=[ 288] 00:19:12.589 bw ( KiB/s): min=77312, max=547768, per=17.17%, avg=306628.05, stdev=121976.28, samples=20 00:19:12.589 iops : min= 302, max= 2139, avg=1197.65, stdev=476.43, samples=20 00:19:12.589 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.26% 00:19:12.589 lat (msec) : 2=0.07%, 4=0.04%, 10=0.32%, 20=4.00%, 50=41.92% 00:19:12.589 lat (msec) : 100=50.92%, 250=2.06%, 500=0.35% 00:19:12.589 cpu : usr=0.34%, sys=3.81%, ctx=2908, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=12044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job2: (groupid=0, jobs=1): err= 0: pid=90612: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=1782, BW=446MiB/s (467MB/s)(4465MiB/10019msec) 00:19:12.589 slat (usec): min=10, max=67242, avg=546.84, stdev=2559.85 00:19:12.589 clat (msec): min=11, max=166, avg=35.32, stdev=18.12 
00:19:12.589 lat (msec): min=11, max=185, avg=35.87, stdev=18.44 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 25], 00:19:12.589 | 30.00th=[ 26], 40.00th=[ 27], 50.00th=[ 30], 60.00th=[ 33], 00:19:12.589 | 70.00th=[ 36], 80.00th=[ 41], 90.00th=[ 58], 95.00th=[ 68], 00:19:12.589 | 99.00th=[ 122], 99.50th=[ 128], 99.90th=[ 132], 99.95th=[ 138], 00:19:12.589 | 99.99th=[ 167] 00:19:12.589 bw ( KiB/s): min=115992, max=603648, per=25.51%, avg=455578.85, stdev=157873.12, samples=20 00:19:12.589 iops : min= 453, max= 2358, avg=1779.55, stdev=616.68, samples=20 00:19:12.589 lat (msec) : 20=3.10%, 50=83.32%, 100=11.19%, 250=2.40% 00:19:12.589 cpu : usr=0.56%, sys=5.17%, ctx=5028, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=17861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job3: (groupid=0, jobs=1): err= 0: pid=90613: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=538, BW=135MiB/s (141MB/s)(1370MiB/10172msec) 00:19:12.589 slat (usec): min=16, max=226383, avg=1781.18, stdev=10909.39 00:19:12.589 clat (msec): min=13, max=393, avg=116.84, stdev=92.97 00:19:12.589 lat (msec): min=13, max=472, avg=118.62, stdev=94.92 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 17], 5.00th=[ 22], 10.00th=[ 25], 20.00th=[ 29], 00:19:12.589 | 30.00th=[ 36], 40.00th=[ 58], 50.00th=[ 69], 60.00th=[ 118], 00:19:12.589 | 70.00th=[ 213], 80.00th=[ 226], 90.00th=[ 241], 95.00th=[ 259], 00:19:12.589 | 99.00th=[ 284], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:19:12.589 | 99.99th=[ 393] 00:19:12.589 bw ( KiB/s): min=62464, max=525824, per=7.76%, avg=138612.95, stdev=137242.65, samples=20 00:19:12.589 iops : min= 244, max= 2054, avg=541.40, stdev=536.09, samples=20 00:19:12.589 lat (msec) : 20=3.10%, 50=33.35%, 100=20.43%, 250=36.11%, 500=7.01% 00:19:12.589 cpu : usr=0.21%, sys=1.71%, ctx=1552, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=5481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job4: (groupid=0, jobs=1): err= 0: pid=90614: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=625, BW=156MiB/s (164MB/s)(1590MiB/10172msec) 00:19:12.589 slat (usec): min=12, max=197480, avg=1565.60, stdev=11814.96 00:19:12.589 clat (msec): min=8, max=422, avg=100.59, stdev=97.26 00:19:12.589 lat (msec): min=9, max=441, avg=102.16, stdev=99.38 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 27], 00:19:12.589 | 30.00th=[ 30], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 39], 00:19:12.589 | 70.00th=[ 211], 80.00th=[ 226], 90.00th=[ 239], 95.00th=[ 247], 00:19:12.589 | 99.00th=[ 275], 99.50th=[ 355], 99.90th=[ 397], 99.95th=[ 401], 00:19:12.589 | 99.99th=[ 422] 00:19:12.589 bw ( KiB/s): min=55808, max=548864, per=9.03%, avg=161233.05, stdev=188726.82, samples=20 00:19:12.589 iops : min= 218, max= 2144, avg=629.70, stdev=737.15, samples=20 
00:19:12.589 lat (msec) : 10=0.06%, 20=2.25%, 50=62.37%, 100=0.20%, 250=31.08% 00:19:12.589 lat (msec) : 500=4.03% 00:19:12.589 cpu : usr=0.28%, sys=1.90%, ctx=1658, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=6360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job5: (groupid=0, jobs=1): err= 0: pid=90615: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=289, BW=72.4MiB/s (76.0MB/s)(736MiB/10163msec) 00:19:12.589 slat (usec): min=15, max=207298, avg=3416.76, stdev=16830.64 00:19:12.589 clat (msec): min=84, max=409, avg=217.19, stdev=36.65 00:19:12.589 lat (msec): min=113, max=453, avg=220.61, stdev=40.46 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 122], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 188], 00:19:12.589 | 30.00th=[ 209], 40.00th=[ 218], 50.00th=[ 224], 60.00th=[ 228], 00:19:12.589 | 70.00th=[ 236], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 268], 00:19:12.589 | 99.00th=[ 305], 99.50th=[ 321], 99.90th=[ 409], 99.95th=[ 409], 00:19:12.589 | 99.99th=[ 409] 00:19:12.589 bw ( KiB/s): min=62600, max=96256, per=4.13%, avg=73748.50, stdev=11483.74, samples=20 00:19:12.589 iops : min= 244, max= 376, avg=288.00, stdev=44.91, samples=20 00:19:12.589 lat (msec) : 100=0.03%, 250=85.26%, 500=14.70% 00:19:12.589 cpu : usr=0.12%, sys=1.01%, ctx=675, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=2945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job6: (groupid=0, jobs=1): err= 0: pid=90616: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=285, BW=71.4MiB/s (74.9MB/s)(729MiB/10209msec) 00:19:12.589 slat (usec): min=16, max=143934, avg=3428.27, stdev=13494.07 00:19:12.589 clat (msec): min=35, max=433, avg=220.33, stdev=45.09 00:19:12.589 lat (msec): min=35, max=433, avg=223.75, stdev=47.23 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 69], 5.00th=[ 153], 10.00th=[ 165], 20.00th=[ 184], 00:19:12.589 | 30.00th=[ 211], 40.00th=[ 222], 50.00th=[ 228], 60.00th=[ 232], 00:19:12.589 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 259], 95.00th=[ 275], 00:19:12.589 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 435], 99.95th=[ 435], 00:19:12.589 | 99.99th=[ 435] 00:19:12.589 bw ( KiB/s): min=58880, max=98816, per=4.09%, avg=72988.90, stdev=11943.70, samples=20 00:19:12.589 iops : min= 230, max= 386, avg=285.05, stdev=46.66, samples=20 00:19:12.589 lat (msec) : 50=0.14%, 100=1.58%, 250=84.02%, 500=14.27% 00:19:12.589 cpu : usr=0.16%, sys=1.01%, ctx=729, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.8% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=2916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.589 job7: (groupid=0, jobs=1): err= 0: 
pid=90617: Tue Nov 19 10:19:30 2024 00:19:12.589 read: IOPS=289, BW=72.4MiB/s (75.9MB/s)(739MiB/10212msec) 00:19:12.589 slat (usec): min=11, max=109811, avg=3335.23, stdev=10907.23 00:19:12.589 clat (msec): min=49, max=467, avg=217.45, stdev=45.67 00:19:12.589 lat (msec): min=50, max=467, avg=220.78, stdev=47.43 00:19:12.589 clat percentiles (msec): 00:19:12.589 | 1.00th=[ 65], 5.00th=[ 148], 10.00th=[ 163], 20.00th=[ 184], 00:19:12.589 | 30.00th=[ 207], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 232], 00:19:12.589 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 264], 95.00th=[ 275], 00:19:12.589 | 99.00th=[ 313], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 468], 00:19:12.589 | 99.99th=[ 468] 00:19:12.589 bw ( KiB/s): min=59904, max=108032, per=4.15%, avg=74039.45, stdev=12600.39, samples=20 00:19:12.589 iops : min= 234, max= 422, avg=289.15, stdev=49.24, samples=20 00:19:12.589 lat (msec) : 50=0.03%, 100=3.18%, 250=80.39%, 500=16.40% 00:19:12.589 cpu : usr=0.10%, sys=1.04%, ctx=719, majf=0, minf=4097 00:19:12.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:12.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.589 issued rwts: total=2957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.589 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.590 job8: (groupid=0, jobs=1): err= 0: pid=90618: Tue Nov 19 10:19:30 2024 00:19:12.590 read: IOPS=286, BW=71.5MiB/s (75.0MB/s)(727MiB/10170msec) 00:19:12.590 slat (usec): min=14, max=152232, avg=3379.33, stdev=13116.62 00:19:12.590 clat (msec): min=12, max=427, avg=220.08, stdev=43.67 00:19:12.590 lat (msec): min=13, max=427, avg=223.45, stdev=46.04 00:19:12.590 clat percentiles (msec): 00:19:12.590 | 1.00th=[ 140], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 184], 00:19:12.590 | 30.00th=[ 213], 40.00th=[ 222], 50.00th=[ 228], 60.00th=[ 234], 00:19:12.590 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 275], 00:19:12.590 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 426], 99.95th=[ 426], 00:19:12.590 | 99.99th=[ 426] 00:19:12.590 bw ( KiB/s): min=51200, max=103118, per=4.08%, avg=72821.40, stdev=13588.54, samples=20 00:19:12.590 iops : min= 200, max= 402, avg=284.35, stdev=53.01, samples=20 00:19:12.590 lat (msec) : 20=0.65%, 50=0.34%, 250=83.19%, 500=15.81% 00:19:12.590 cpu : usr=0.11%, sys=1.02%, ctx=529, majf=0, minf=4097 00:19:12.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:19:12.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.590 issued rwts: total=2909,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.590 job9: (groupid=0, jobs=1): err= 0: pid=90619: Tue Nov 19 10:19:30 2024 00:19:12.590 read: IOPS=1170, BW=293MiB/s (307MB/s)(2937MiB/10041msec) 00:19:12.590 slat (usec): min=15, max=57089, avg=840.39, stdev=3543.34 00:19:12.590 clat (msec): min=4, max=153, avg=53.75, stdev=19.99 00:19:12.590 lat (msec): min=5, max=182, avg=54.59, stdev=20.41 00:19:12.590 clat percentiles (msec): 00:19:12.590 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 28], 20.00th=[ 36], 00:19:12.590 | 30.00th=[ 46], 40.00th=[ 51], 50.00th=[ 56], 60.00th=[ 59], 00:19:12.590 | 70.00th=[ 63], 80.00th=[ 66], 90.00th=[ 72], 95.00th=[ 82], 00:19:12.590 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 140], 
99.95th=[ 144], 00:19:12.590 | 99.99th=[ 146] 00:19:12.590 bw ( KiB/s): min=121587, max=541184, per=16.75%, avg=299068.00, stdev=97290.77, samples=20 00:19:12.590 iops : min= 474, max= 2114, avg=1168.00, stdev=380.05, samples=20 00:19:12.590 lat (msec) : 10=0.31%, 20=1.38%, 50=36.41%, 100=58.81%, 250=3.09% 00:19:12.590 cpu : usr=0.43%, sys=3.49%, ctx=2498, majf=0, minf=4097 00:19:12.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:12.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.590 issued rwts: total=11749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.590 job10: (groupid=0, jobs=1): err= 0: pid=90620: Tue Nov 19 10:19:30 2024 00:19:12.590 read: IOPS=292, BW=73.1MiB/s (76.7MB/s)(744MiB/10166msec) 00:19:12.590 slat (usec): min=15, max=158414, avg=3345.06, stdev=12344.98 00:19:12.590 clat (msec): min=40, max=368, avg=215.15, stdev=50.26 00:19:12.590 lat (msec): min=40, max=387, avg=218.50, stdev=52.29 00:19:12.590 clat percentiles (msec): 00:19:12.590 | 1.00th=[ 64], 5.00th=[ 116], 10.00th=[ 128], 20.00th=[ 182], 00:19:12.590 | 30.00th=[ 211], 40.00th=[ 224], 50.00th=[ 230], 60.00th=[ 234], 00:19:12.590 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 264], 95.00th=[ 271], 00:19:12.590 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:19:12.590 | 99.99th=[ 368] 00:19:12.590 bw ( KiB/s): min=53248, max=129024, per=4.17%, avg=74481.90, stdev=18921.03, samples=20 00:19:12.590 iops : min= 208, max= 504, avg=290.80, stdev=73.97, samples=20 00:19:12.590 lat (msec) : 50=0.17%, 100=2.19%, 250=79.59%, 500=18.06% 00:19:12.590 cpu : usr=0.11%, sys=1.06%, ctx=755, majf=0, minf=4097 00:19:12.590 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:19:12.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:12.590 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:12.590 issued rwts: total=2974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:12.590 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:12.590 00:19:12.590 Run status group 0 (all jobs): 00:19:12.590 READ: bw=1744MiB/s (1828MB/s), 71.4MiB/s-446MiB/s (74.9MB/s-467MB/s), io=17.4GiB (18.7GB), run=10019-10212msec 00:19:12.590 00:19:12.590 Disk stats (read/write): 00:19:12.590 nvme0n1: ios=5944/0, merge=0/0, ticks=1246106/0, in_queue=1246106, util=97.85% 00:19:12.590 nvme10n1: ios=23960/0, merge=0/0, ticks=1233379/0, in_queue=1233379, util=97.78% 00:19:12.590 nvme1n1: ios=35625/0, merge=0/0, ticks=1219981/0, in_queue=1219981, util=98.02% 00:19:12.590 nvme2n1: ios=10837/0, merge=0/0, ticks=1225221/0, in_queue=1225221, util=98.17% 00:19:12.590 nvme3n1: ios=12609/0, merge=0/0, ticks=1231566/0, in_queue=1231566, util=98.19% 00:19:12.590 nvme4n1: ios=5762/0, merge=0/0, ticks=1237233/0, in_queue=1237233, util=98.32% 00:19:12.590 nvme5n1: ios=5726/0, merge=0/0, ticks=1233839/0, in_queue=1233839, util=98.48% 00:19:12.590 nvme6n1: ios=5791/0, merge=0/0, ticks=1238211/0, in_queue=1238211, util=98.68% 00:19:12.590 nvme7n1: ios=5719/0, merge=0/0, ticks=1240124/0, in_queue=1240124, util=98.90% 00:19:12.590 nvme8n1: ios=23371/0, merge=0/0, ticks=1230255/0, in_queue=1230255, util=98.69% 00:19:12.590 nvme9n1: ios=5821/0, merge=0/0, ticks=1235152/0, in_queue=1235152, util=99.11% 00:19:12.590 10:19:30 -- target/multiconnection.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:12.590 [global] 00:19:12.590 thread=1 00:19:12.590 invalidate=1 00:19:12.590 rw=randwrite 00:19:12.590 time_based=1 00:19:12.590 runtime=10 00:19:12.590 ioengine=libaio 00:19:12.590 direct=1 00:19:12.590 bs=262144 00:19:12.590 iodepth=64 00:19:12.590 norandommap=1 00:19:12.590 numjobs=1 00:19:12.590 00:19:12.590 [job0] 00:19:12.590 filename=/dev/nvme0n1 00:19:12.590 [job1] 00:19:12.590 filename=/dev/nvme10n1 00:19:12.590 [job2] 00:19:12.590 filename=/dev/nvme1n1 00:19:12.590 [job3] 00:19:12.590 filename=/dev/nvme2n1 00:19:12.590 [job4] 00:19:12.590 filename=/dev/nvme3n1 00:19:12.590 [job5] 00:19:12.590 filename=/dev/nvme4n1 00:19:12.590 [job6] 00:19:12.590 filename=/dev/nvme5n1 00:19:12.590 [job7] 00:19:12.590 filename=/dev/nvme6n1 00:19:12.590 [job8] 00:19:12.590 filename=/dev/nvme7n1 00:19:12.590 [job9] 00:19:12.590 filename=/dev/nvme8n1 00:19:12.590 [job10] 00:19:12.590 filename=/dev/nvme9n1 00:19:12.590 Could not set queue depth (nvme0n1) 00:19:12.590 Could not set queue depth (nvme10n1) 00:19:12.590 Could not set queue depth (nvme1n1) 00:19:12.590 Could not set queue depth (nvme2n1) 00:19:12.590 Could not set queue depth (nvme3n1) 00:19:12.590 Could not set queue depth (nvme4n1) 00:19:12.590 Could not set queue depth (nvme5n1) 00:19:12.590 Could not set queue depth (nvme6n1) 00:19:12.590 Could not set queue depth (nvme7n1) 00:19:12.590 Could not set queue depth (nvme8n1) 00:19:12.590 Could not set queue depth (nvme9n1) 00:19:12.590 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:12.590 fio-3.35 00:19:12.590 Starting 11 threads 00:19:22.564 00:19:22.564 job0: (groupid=0, jobs=1): err= 0: pid=90823: Tue Nov 19 10:19:41 2024 00:19:22.564 write: IOPS=438, BW=110MiB/s (115MB/s)(1109MiB/10119msec); 0 zone resets 00:19:22.564 slat (usec): min=16, max=47284, avg=2236.80, stdev=3975.34 00:19:22.564 clat (usec): min=1662, max=252440, avg=143709.97, stdev=23072.51 00:19:22.564 lat (usec): min=1724, max=252473, avg=145946.76, stdev=23111.85 00:19:22.564 clat percentiles (msec): 00:19:22.564 | 1.00th=[ 13], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 138], 00:19:22.564 | 
30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:19:22.564 | 70.00th=[ 150], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:19:22.564 | 99.00th=[ 188], 99.50th=[ 201], 99.90th=[ 245], 99.95th=[ 245], 00:19:22.564 | 99.99th=[ 253] 00:19:22.564 bw ( KiB/s): min=100352, max=151040, per=6.73%, avg=111937.95, stdev=10742.60, samples=20 00:19:22.564 iops : min= 392, max= 590, avg=437.25, stdev=41.97, samples=20 00:19:22.564 lat (msec) : 2=0.02%, 4=0.20%, 10=0.56%, 20=1.04%, 50=0.09% 00:19:22.564 lat (msec) : 100=0.63%, 250=97.41%, 500=0.05% 00:19:22.564 cpu : usr=0.82%, sys=1.12%, ctx=2534, majf=0, minf=1 00:19:22.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:22.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.564 issued rwts: total=0,4436,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.564 job1: (groupid=0, jobs=1): err= 0: pid=90825: Tue Nov 19 10:19:41 2024 00:19:22.564 write: IOPS=571, BW=143MiB/s (150MB/s)(1442MiB/10090msec); 0 zone resets 00:19:22.564 slat (usec): min=18, max=18837, avg=1706.09, stdev=2975.84 00:19:22.564 clat (msec): min=21, max=199, avg=110.24, stdev=14.70 00:19:22.564 lat (msec): min=21, max=199, avg=111.95, stdev=14.69 00:19:22.564 clat percentiles (msec): 00:19:22.564 | 1.00th=[ 44], 5.00th=[ 100], 10.00th=[ 102], 20.00th=[ 105], 00:19:22.564 | 30.00th=[ 107], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 110], 00:19:22.564 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 134], 00:19:22.564 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 192], 99.95th=[ 192], 00:19:22.564 | 99.99th=[ 201] 00:19:22.564 bw ( KiB/s): min=120832, max=167936, per=8.77%, avg=145996.80, stdev=9723.42, samples=20 00:19:22.564 iops : min= 472, max= 656, avg=570.30, stdev=37.98, samples=20 00:19:22.564 lat (msec) : 50=1.23%, 100=5.03%, 250=93.74% 00:19:22.564 cpu : usr=0.91%, sys=1.55%, ctx=4612, majf=0, minf=1 00:19:22.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:22.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.564 issued rwts: total=0,5766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.564 job2: (groupid=0, jobs=1): err= 0: pid=90836: Tue Nov 19 10:19:41 2024 00:19:22.564 write: IOPS=531, BW=133MiB/s (139MB/s)(1342MiB/10097msec); 0 zone resets 00:19:22.564 slat (usec): min=17, max=30450, avg=1858.89, stdev=3253.39 00:19:22.564 clat (msec): min=21, max=222, avg=118.53, stdev=19.45 00:19:22.564 lat (msec): min=21, max=222, avg=120.39, stdev=19.47 00:19:22.564 clat percentiles (msec): 00:19:22.564 | 1.00th=[ 103], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:19:22.564 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 112], 60.00th=[ 114], 00:19:22.564 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 171], 00:19:22.564 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 222], 99.95th=[ 224], 00:19:22.564 | 99.99th=[ 224] 00:19:22.565 bw ( KiB/s): min=86016, max=150016, per=8.16%, avg=135756.80, stdev=18172.89, samples=20 00:19:22.565 iops : min= 336, max= 586, avg=530.30, stdev=70.99, samples=20 00:19:22.565 lat (msec) : 50=0.07%, 100=0.50%, 250=99.42% 00:19:22.565 cpu : usr=1.04%, sys=1.29%, ctx=4773, majf=0, minf=1 
00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,5366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job3: (groupid=0, jobs=1): err= 0: pid=90837: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=606, BW=152MiB/s (159MB/s)(1529MiB/10087msec); 0 zone resets 00:19:22.565 slat (usec): min=16, max=242170, avg=1630.96, stdev=4175.38 00:19:22.565 clat (msec): min=2, max=286, avg=103.92, stdev=29.75 00:19:22.565 lat (msec): min=2, max=286, avg=105.55, stdev=29.92 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 102], 00:19:22.565 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 109], 60.00th=[ 110], 00:19:22.565 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 129], 00:19:22.565 | 99.00th=[ 251], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:19:22.565 | 99.99th=[ 288] 00:19:22.565 bw ( KiB/s): min=129024, max=275968, per=9.31%, avg=154905.60, stdev=31952.15, samples=20 00:19:22.565 iops : min= 504, max= 1078, avg=605.10, stdev=124.81, samples=20 00:19:22.565 lat (msec) : 4=0.02%, 10=0.03%, 50=11.35%, 100=4.76%, 250=82.81% 00:19:22.565 lat (msec) : 500=1.03% 00:19:22.565 cpu : usr=0.93%, sys=1.54%, ctx=4639, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,6114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job4: (groupid=0, jobs=1): err= 0: pid=90838: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=607, BW=152MiB/s (159MB/s)(1534MiB/10094msec); 0 zone resets 00:19:22.565 slat (usec): min=16, max=16630, avg=1619.21, stdev=2924.34 00:19:22.565 clat (msec): min=3, max=227, avg=103.61, stdev=30.42 00:19:22.565 lat (msec): min=3, max=227, avg=105.23, stdev=30.75 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 105], 00:19:22.565 | 30.00th=[ 108], 40.00th=[ 111], 50.00th=[ 112], 60.00th=[ 113], 00:19:22.565 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 129], 95.00th=[ 131], 00:19:22.565 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 222], 99.95th=[ 222], 00:19:22.565 | 99.99th=[ 228] 00:19:22.565 bw ( KiB/s): min=120832, max=382976, per=9.34%, avg=155479.65, stdev=55013.40, samples=20 00:19:22.565 iops : min= 472, max= 1496, avg=607.30, stdev=214.90, samples=20 00:19:22.565 lat (msec) : 4=0.03%, 10=0.11%, 20=0.26%, 50=15.97%, 100=0.34% 00:19:22.565 lat (msec) : 250=83.28% 00:19:22.565 cpu : usr=0.96%, sys=1.52%, ctx=6606, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,6137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job5: (groupid=0, jobs=1): err= 0: pid=90839: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=428, 
BW=107MiB/s (112MB/s)(1085MiB/10119msec); 0 zone resets 00:19:22.565 slat (usec): min=16, max=19423, avg=2240.65, stdev=3952.46 00:19:22.565 clat (msec): min=21, max=252, avg=147.00, stdev=21.10 00:19:22.565 lat (msec): min=21, max=252, avg=149.24, stdev=21.10 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 57], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 140], 00:19:22.565 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 148], 00:19:22.565 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 182], 00:19:22.565 | 99.00th=[ 207], 99.50th=[ 218], 99.90th=[ 245], 99.95th=[ 245], 00:19:22.565 | 99.99th=[ 253] 00:19:22.565 bw ( KiB/s): min=92160, max=119296, per=6.58%, avg=109440.00, stdev=6982.50, samples=20 00:19:22.565 iops : min= 360, max= 466, avg=427.50, stdev=27.28, samples=20 00:19:22.565 lat (msec) : 50=0.55%, 100=2.26%, 250=97.14%, 500=0.05% 00:19:22.565 cpu : usr=0.66%, sys=1.23%, ctx=6131, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,4338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job6: (groupid=0, jobs=1): err= 0: pid=90840: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=431, BW=108MiB/s (113MB/s)(1091MiB/10109msec); 0 zone resets 00:19:22.565 slat (usec): min=18, max=71522, avg=2252.91, stdev=4061.80 00:19:22.565 clat (msec): min=34, max=240, avg=145.98, stdev=15.98 00:19:22.565 lat (msec): min=34, max=241, avg=148.23, stdev=15.83 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 83], 5.00th=[ 132], 10.00th=[ 133], 20.00th=[ 140], 00:19:22.565 | 30.00th=[ 142], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:19:22.565 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 167], 00:19:22.565 | 99.00th=[ 188], 99.50th=[ 199], 99.90th=[ 232], 99.95th=[ 232], 00:19:22.565 | 99.99th=[ 241] 00:19:22.565 bw ( KiB/s): min=96256, max=131584, per=6.61%, avg=110069.15, stdev=8031.02, samples=20 00:19:22.565 iops : min= 376, max= 514, avg=429.95, stdev=31.37, samples=20 00:19:22.565 lat (msec) : 50=0.23%, 100=1.40%, 250=98.37% 00:19:22.565 cpu : usr=0.93%, sys=0.91%, ctx=4979, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,4363,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job7: (groupid=0, jobs=1): err= 0: pid=90841: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=555, BW=139MiB/s (146MB/s)(1403MiB/10098msec); 0 zone resets 00:19:22.565 slat (usec): min=17, max=23322, avg=1736.84, stdev=3068.33 00:19:22.565 clat (msec): min=2, max=255, avg=113.38, stdev=21.97 00:19:22.565 lat (msec): min=2, max=258, avg=115.12, stdev=22.08 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 58], 5.00th=[ 101], 10.00th=[ 102], 20.00th=[ 106], 00:19:22.565 | 30.00th=[ 107], 40.00th=[ 109], 50.00th=[ 109], 60.00th=[ 111], 00:19:22.565 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 161], 00:19:22.565 | 99.00th=[ 192], 99.50th=[ 239], 99.90th=[ 251], 99.95th=[ 253], 00:19:22.565 
| 99.99th=[ 255] 00:19:22.565 bw ( KiB/s): min=96768, max=154112, per=8.54%, avg=142133.90, stdev=13794.13, samples=20 00:19:22.565 iops : min= 378, max= 602, avg=555.10, stdev=53.91, samples=20 00:19:22.565 lat (msec) : 4=0.05%, 10=0.16%, 20=0.09%, 50=0.48%, 100=4.31% 00:19:22.565 lat (msec) : 250=94.78%, 500=0.12% 00:19:22.565 cpu : usr=0.89%, sys=1.45%, ctx=6196, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,5612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job8: (groupid=0, jobs=1): err= 0: pid=90842: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=1382, BW=346MiB/s (362MB/s)(3471MiB/10040msec); 0 zone resets 00:19:22.565 slat (usec): min=14, max=81787, avg=701.97, stdev=1489.89 00:19:22.565 clat (msec): min=6, max=250, avg=45.55, stdev=21.42 00:19:22.565 lat (msec): min=6, max=250, avg=46.25, stdev=21.69 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:19:22.565 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 44], 00:19:22.565 | 70.00th=[ 46], 80.00th=[ 47], 90.00th=[ 50], 95.00th=[ 51], 00:19:22.565 | 99.00th=[ 186], 99.50th=[ 205], 99.90th=[ 232], 99.95th=[ 239], 00:19:22.565 | 99.99th=[ 251] 00:19:22.565 bw ( KiB/s): min=73728, max=402944, per=21.26%, avg=353792.00, stdev=73147.85, samples=20 00:19:22.565 iops : min= 288, max= 1574, avg=1382.00, stdev=285.73, samples=20 00:19:22.565 lat (msec) : 10=0.14%, 20=0.81%, 50=91.97%, 100=5.12%, 250=1.93% 00:19:22.565 lat (msec) : 500=0.02% 00:19:22.565 cpu : usr=1.99%, sys=3.35%, ctx=17533, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.565 issued rwts: total=0,13883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.565 job9: (groupid=0, jobs=1): err= 0: pid=90843: Tue Nov 19 10:19:41 2024 00:19:22.565 write: IOPS=434, BW=109MiB/s (114MB/s)(1099MiB/10119msec); 0 zone resets 00:19:22.565 slat (usec): min=15, max=20183, avg=2231.81, stdev=3898.22 00:19:22.565 clat (msec): min=6, max=258, avg=144.98, stdev=17.46 00:19:22.565 lat (msec): min=6, max=258, avg=147.21, stdev=17.35 00:19:22.565 clat percentiles (msec): 00:19:22.565 | 1.00th=[ 77], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 138], 00:19:22.565 | 30.00th=[ 140], 40.00th=[ 142], 50.00th=[ 144], 60.00th=[ 146], 00:19:22.565 | 70.00th=[ 150], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 167], 00:19:22.565 | 99.00th=[ 188], 99.50th=[ 213], 99.90th=[ 249], 99.95th=[ 249], 00:19:22.565 | 99.99th=[ 259] 00:19:22.565 bw ( KiB/s): min=100352, max=121856, per=6.67%, avg=110924.80, stdev=6845.61, samples=20 00:19:22.565 iops : min= 392, max= 476, avg=433.30, stdev=26.74, samples=20 00:19:22.565 lat (msec) : 10=0.16%, 50=0.55%, 100=0.66%, 250=98.59%, 500=0.05% 00:19:22.565 cpu : usr=0.76%, sys=1.26%, ctx=8521, majf=0, minf=1 00:19:22.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:22.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:22.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.566 issued rwts: total=0,4396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.566 job10: (groupid=0, jobs=1): err= 0: pid=90844: Tue Nov 19 10:19:41 2024 00:19:22.566 write: IOPS=531, BW=133MiB/s (139MB/s)(1341MiB/10098msec); 0 zone resets 00:19:22.566 slat (usec): min=16, max=43127, avg=1858.97, stdev=3276.92 00:19:22.566 clat (msec): min=24, max=248, avg=118.56, stdev=21.13 00:19:22.566 lat (msec): min=24, max=248, avg=120.42, stdev=21.19 00:19:22.566 clat percentiles (msec): 00:19:22.566 | 1.00th=[ 103], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 110], 00:19:22.566 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:19:22.566 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 171], 00:19:22.566 | 99.00th=[ 209], 99.50th=[ 236], 99.90th=[ 249], 99.95th=[ 249], 00:19:22.566 | 99.99th=[ 249] 00:19:22.566 bw ( KiB/s): min=84480, max=148480, per=8.16%, avg=135731.20, stdev=18024.71, samples=20 00:19:22.566 iops : min= 330, max= 580, avg=530.20, stdev=70.41, samples=20 00:19:22.566 lat (msec) : 50=0.30%, 100=0.47%, 250=99.24% 00:19:22.566 cpu : usr=0.88%, sys=1.43%, ctx=7140, majf=0, minf=1 00:19:22.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:22.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:22.566 issued rwts: total=0,5365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:22.566 00:19:22.566 Run status group 0 (all jobs): 00:19:22.566 WRITE: bw=1625MiB/s (1704MB/s), 107MiB/s-346MiB/s (112MB/s-362MB/s), io=16.1GiB (17.2GB), run=10040-10119msec 00:19:22.566 00:19:22.566 Disk stats (read/write): 00:19:22.566 nvme0n1: ios=49/8728, merge=0/0, ticks=24/1213215, in_queue=1213239, util=97.74% 00:19:22.566 nvme10n1: ios=49/11397, merge=0/0, ticks=44/1216203, in_queue=1216247, util=98.02% 00:19:22.566 nvme1n1: ios=15/10590, merge=0/0, ticks=28/1214709, in_queue=1214737, util=98.01% 00:19:22.566 nvme2n1: ios=13/12088, merge=0/0, ticks=10/1214589, in_queue=1214599, util=98.07% 00:19:22.566 nvme3n1: ios=0/12128, merge=0/0, ticks=0/1214229, in_queue=1214229, util=98.05% 00:19:22.566 nvme4n1: ios=15/8537, merge=0/0, ticks=97/1215090, in_queue=1215187, util=98.42% 00:19:22.566 nvme5n1: ios=0/8570, merge=0/0, ticks=0/1212047, in_queue=1212047, util=98.24% 00:19:22.566 nvme6n1: ios=0/11100, merge=0/0, ticks=0/1217464, in_queue=1217464, util=98.63% 00:19:22.566 nvme7n1: ios=0/27595, merge=0/0, ticks=0/1218974, in_queue=1218974, util=98.65% 00:19:22.566 nvme8n1: ios=0/8663, merge=0/0, ticks=0/1213900, in_queue=1213900, util=98.87% 00:19:22.566 nvme9n1: ios=0/10595, merge=0/0, ticks=0/1215292, in_queue=1215292, util=98.91% 00:19:22.566 10:19:41 -- target/multiconnection.sh@36 -- # sync 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:19:22.566 10:19:41 -- 
common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:19:22.566 10:19:41 -- 
common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.566 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:22.566 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.566 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:22.566 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.566 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.566 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.566 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.566 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:22.566 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:22.566 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:22.566 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:19:22.567 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.567 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:22.567 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.567 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:22.567 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.567 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.567 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.567 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.567 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:22.567 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:22.567 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:22.567 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:19:22.567 10:19:41 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:22.567 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.567 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.567 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:22.567 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.567 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.567 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.567 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.567 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:22.567 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:22.567 10:19:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:22.567 10:19:41 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.567 10:19:41 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:19:22.567 10:19:41 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.567 10:19:41 -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:22.567 10:19:41 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.567 10:19:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:22.567 10:19:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.567 10:19:41 -- common/autotest_common.sh@10 -- # set +x 00:19:22.567 10:19:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.567 10:19:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:22.567 10:19:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:22.567 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:22.567 10:19:42 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:22.567 10:19:42 -- common/autotest_common.sh@1208 -- # local i=0 00:19:22.567 10:19:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:19:22.567 10:19:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:19:22.567 10:19:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:19:22.567 10:19:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:22.567 10:19:42 -- common/autotest_common.sh@1220 -- # return 0 00:19:22.567 10:19:42 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:22.567 10:19:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.567 10:19:42 -- common/autotest_common.sh@10 -- # set +x 00:19:22.567 10:19:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.567 10:19:42 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:22.567 10:19:42 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:22.567 10:19:42 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:22.567 10:19:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:22.567 10:19:42 -- nvmf/common.sh@116 -- # sync 00:19:22.567 10:19:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:22.567 10:19:42 -- nvmf/common.sh@119 -- # set +e 00:19:22.567 10:19:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:22.567 10:19:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:22.567 rmmod nvme_tcp 00:19:22.567 rmmod nvme_fabrics 00:19:22.567 rmmod nvme_keyring 00:19:22.567 10:19:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:22.567 10:19:42 -- nvmf/common.sh@123 -- # set -e 00:19:22.567 10:19:42 -- nvmf/common.sh@124 -- # return 0 00:19:22.567 10:19:42 -- nvmf/common.sh@477 -- # '[' -n 90132 ']' 00:19:22.567 10:19:42 -- nvmf/common.sh@478 -- # killprocess 90132 00:19:22.567 10:19:42 -- common/autotest_common.sh@936 -- # '[' -z 90132 ']' 00:19:22.567 10:19:42 -- common/autotest_common.sh@940 -- # kill -0 90132 00:19:22.567 10:19:42 -- common/autotest_common.sh@941 -- # uname 00:19:22.567 10:19:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:22.567 10:19:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90132 00:19:22.825 10:19:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:22.825 10:19:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:22.825 killing process with pid 90132 00:19:22.825 10:19:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90132' 00:19:22.825 10:19:42 -- common/autotest_common.sh@955 -- # kill 90132 00:19:22.825 10:19:42 -- common/autotest_common.sh@960 -- # wait 90132 00:19:23.083 10:19:42 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:23.083 10:19:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:23.083 10:19:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:23.083 10:19:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.083 10:19:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:23.083 10:19:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.083 10:19:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.083 10:19:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.083 10:19:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:23.083 00:19:23.083 real 0m49.258s 00:19:23.083 user 2m44.964s 00:19:23.083 sys 0m23.849s 00:19:23.083 10:19:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:23.083 10:19:42 -- common/autotest_common.sh@10 -- # set +x 00:19:23.083 ************************************ 00:19:23.083 END TEST nvmf_multiconnection 00:19:23.083 ************************************ 00:19:23.083 10:19:42 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:23.083 10:19:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:23.083 10:19:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.083 10:19:42 -- common/autotest_common.sh@10 -- # set +x 00:19:23.083 ************************************ 00:19:23.083 START TEST nvmf_initiator_timeout 00:19:23.083 ************************************ 00:19:23.083 10:19:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:23.083 * Looking for test storage... 00:19:23.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:23.083 10:19:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:23.083 10:19:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:23.083 10:19:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:23.342 10:19:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:23.342 10:19:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:23.342 10:19:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:23.342 10:19:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:23.342 10:19:42 -- scripts/common.sh@335 -- # IFS=.-: 00:19:23.342 10:19:42 -- scripts/common.sh@335 -- # read -ra ver1 00:19:23.342 10:19:42 -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.342 10:19:42 -- scripts/common.sh@336 -- # read -ra ver2 00:19:23.342 10:19:42 -- scripts/common.sh@337 -- # local 'op=<' 00:19:23.342 10:19:42 -- scripts/common.sh@339 -- # ver1_l=2 00:19:23.342 10:19:42 -- scripts/common.sh@340 -- # ver2_l=1 00:19:23.342 10:19:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:23.342 10:19:42 -- scripts/common.sh@343 -- # case "$op" in 00:19:23.342 10:19:42 -- scripts/common.sh@344 -- # : 1 00:19:23.342 10:19:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:23.342 10:19:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.342 10:19:42 -- scripts/common.sh@364 -- # decimal 1 00:19:23.342 10:19:42 -- scripts/common.sh@352 -- # local d=1 00:19:23.342 10:19:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.342 10:19:42 -- scripts/common.sh@354 -- # echo 1 00:19:23.342 10:19:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:23.342 10:19:42 -- scripts/common.sh@365 -- # decimal 2 00:19:23.342 10:19:42 -- scripts/common.sh@352 -- # local d=2 00:19:23.342 10:19:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.342 10:19:42 -- scripts/common.sh@354 -- # echo 2 00:19:23.342 10:19:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:23.342 10:19:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:23.342 10:19:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:23.342 10:19:42 -- scripts/common.sh@367 -- # return 0 00:19:23.342 10:19:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.342 10:19:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:23.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.343 --rc genhtml_branch_coverage=1 00:19:23.343 --rc genhtml_function_coverage=1 00:19:23.343 --rc genhtml_legend=1 00:19:23.343 --rc geninfo_all_blocks=1 00:19:23.343 --rc geninfo_unexecuted_blocks=1 00:19:23.343 00:19:23.343 ' 00:19:23.343 10:19:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:23.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.343 --rc genhtml_branch_coverage=1 00:19:23.343 --rc genhtml_function_coverage=1 00:19:23.343 --rc genhtml_legend=1 00:19:23.343 --rc geninfo_all_blocks=1 00:19:23.343 --rc geninfo_unexecuted_blocks=1 00:19:23.343 00:19:23.343 ' 00:19:23.343 10:19:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:23.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.343 --rc genhtml_branch_coverage=1 00:19:23.343 --rc genhtml_function_coverage=1 00:19:23.343 --rc genhtml_legend=1 00:19:23.343 --rc geninfo_all_blocks=1 00:19:23.343 --rc geninfo_unexecuted_blocks=1 00:19:23.343 00:19:23.343 ' 00:19:23.343 10:19:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:23.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.343 --rc genhtml_branch_coverage=1 00:19:23.343 --rc genhtml_function_coverage=1 00:19:23.343 --rc genhtml_legend=1 00:19:23.343 --rc geninfo_all_blocks=1 00:19:23.343 --rc geninfo_unexecuted_blocks=1 00:19:23.343 00:19:23.343 ' 00:19:23.343 10:19:42 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:23.343 10:19:42 -- nvmf/common.sh@7 -- # uname -s 00:19:23.343 10:19:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.343 10:19:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.343 10:19:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.343 10:19:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.343 10:19:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.343 10:19:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.343 10:19:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.343 10:19:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.343 10:19:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.343 10:19:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
00:19:23.343 10:19:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:19:23.343 10:19:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.343 10:19:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.343 10:19:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:23.343 10:19:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.343 10:19:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.343 10:19:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.343 10:19:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.343 10:19:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.343 10:19:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.343 10:19:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.343 10:19:42 -- paths/export.sh@5 -- # export PATH 00:19:23.343 10:19:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.343 10:19:42 -- nvmf/common.sh@46 -- # : 0 00:19:23.343 10:19:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:23.343 10:19:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:23.343 10:19:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:23.343 10:19:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.343 10:19:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.343 10:19:42 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:19:23.343 10:19:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:23.343 10:19:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:23.343 10:19:42 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:23.343 10:19:42 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:23.343 10:19:42 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:23.343 10:19:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:23.343 10:19:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.343 10:19:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:23.343 10:19:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:23.343 10:19:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:23.343 10:19:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.343 10:19:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.343 10:19:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.343 10:19:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:23.343 10:19:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:23.343 10:19:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.343 10:19:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.343 10:19:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:23.343 10:19:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:23.343 10:19:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:23.343 10:19:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:23.343 10:19:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:23.343 10:19:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.343 10:19:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:23.343 10:19:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:23.343 10:19:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:23.343 10:19:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:23.343 10:19:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:23.343 10:19:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:23.343 Cannot find device "nvmf_tgt_br" 00:19:23.343 10:19:42 -- nvmf/common.sh@154 -- # true 00:19:23.343 10:19:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:23.343 Cannot find device "nvmf_tgt_br2" 00:19:23.343 10:19:42 -- nvmf/common.sh@155 -- # true 00:19:23.343 10:19:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:23.343 10:19:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:23.343 Cannot find device "nvmf_tgt_br" 00:19:23.343 10:19:42 -- nvmf/common.sh@157 -- # true 00:19:23.343 10:19:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:23.343 Cannot find device "nvmf_tgt_br2" 00:19:23.343 10:19:42 -- nvmf/common.sh@158 -- # true 00:19:23.344 10:19:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:23.344 10:19:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:23.344 10:19:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:23.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.344 10:19:42 -- nvmf/common.sh@161 -- # true 00:19:23.344 10:19:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:23.344 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:23.344 10:19:42 -- nvmf/common.sh@162 -- # true 00:19:23.344 10:19:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:23.344 10:19:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:23.344 10:19:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:23.344 10:19:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:23.344 10:19:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:23.344 10:19:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:23.344 10:19:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:23.344 10:19:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:23.344 10:19:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:23.344 10:19:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:23.602 10:19:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:23.602 10:19:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:23.602 10:19:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:23.602 10:19:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:23.602 10:19:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:23.602 10:19:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:23.602 10:19:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:23.602 10:19:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:23.602 10:19:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:23.602 10:19:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:23.603 10:19:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:23.603 10:19:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:23.603 10:19:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:23.603 10:19:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:23.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:19:23.603 00:19:23.603 --- 10.0.0.2 ping statistics --- 00:19:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.603 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:23.603 10:19:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:23.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:23.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:23.603 00:19:23.603 --- 10.0.0.3 ping statistics --- 00:19:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.603 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:23.603 10:19:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:23.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:23.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:23.603 00:19:23.603 --- 10.0.0.1 ping statistics --- 00:19:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.603 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:23.603 10:19:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.603 10:19:43 -- nvmf/common.sh@421 -- # return 0 00:19:23.603 10:19:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.603 10:19:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.603 10:19:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:23.603 10:19:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:23.603 10:19:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:23.603 10:19:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:23.603 10:19:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:23.603 10:19:43 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:23.603 10:19:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.603 10:19:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.603 10:19:43 -- common/autotest_common.sh@10 -- # set +x 00:19:23.603 10:19:43 -- nvmf/common.sh@469 -- # nvmfpid=91215 00:19:23.603 10:19:43 -- nvmf/common.sh@470 -- # waitforlisten 91215 00:19:23.603 10:19:43 -- common/autotest_common.sh@829 -- # '[' -z 91215 ']' 00:19:23.603 10:19:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:23.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.603 10:19:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.603 10:19:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.603 10:19:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.603 10:19:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.603 10:19:43 -- common/autotest_common.sh@10 -- # set +x 00:19:23.603 [2024-11-19 10:19:43.079529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:23.603 [2024-11-19 10:19:43.079618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.861 [2024-11-19 10:19:43.216100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.861 [2024-11-19 10:19:43.252105] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.861 [2024-11-19 10:19:43.252250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.861 [2024-11-19 10:19:43.252264] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.861 [2024-11-19 10:19:43.252273] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.861 [2024-11-19 10:19:43.252397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.861 [2024-11-19 10:19:43.252892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.861 [2024-11-19 10:19:43.252942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.861 [2024-11-19 10:19:43.252947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.796 10:19:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.796 10:19:44 -- common/autotest_common.sh@862 -- # return 0 00:19:24.796 10:19:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:24.796 10:19:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 10:19:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 Malloc0 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 Delay0 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 [2024-11-19 10:19:44.235201] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:24.796 10:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.796 10:19:44 -- common/autotest_common.sh@10 -- # set +x 00:19:24.796 [2024-11-19 10:19:44.263421] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:24.796 10:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.796 10:19:44 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:25.054 10:19:44 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:25.054 10:19:44 -- common/autotest_common.sh@1187 -- # local i=0 00:19:25.054 10:19:44 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.054 10:19:44 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:19:25.054 10:19:44 -- common/autotest_common.sh@1194 -- # sleep 2 00:19:26.956 10:19:46 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:19:26.956 10:19:46 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:19:26.956 10:19:46 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:19:26.956 10:19:46 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:19:26.956 10:19:46 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:19:26.956 10:19:46 -- common/autotest_common.sh@1197 -- # return 0 00:19:26.956 10:19:46 -- target/initiator_timeout.sh@35 -- # fio_pid=91298 00:19:26.956 10:19:46 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:26.956 10:19:46 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:26.956 [global] 00:19:26.956 thread=1 00:19:26.956 invalidate=1 00:19:26.956 rw=write 00:19:26.956 time_based=1 00:19:26.956 runtime=60 00:19:26.956 ioengine=libaio 00:19:26.956 direct=1 00:19:26.956 bs=4096 00:19:26.956 iodepth=1 00:19:26.956 norandommap=0 00:19:26.956 numjobs=1 00:19:26.956 00:19:26.956 verify_dump=1 00:19:26.956 verify_backlog=512 00:19:26.956 verify_state_save=0 00:19:26.956 do_verify=1 00:19:26.956 verify=crc32c-intel 00:19:26.956 [job0] 00:19:26.956 filename=/dev/nvme0n1 00:19:26.956 Could not set queue depth (nvme0n1) 00:19:27.215 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:27.215 fio-3.35 00:19:27.215 Starting 1 thread 00:19:30.597 10:19:49 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:30.597 10:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.597 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 true 00:19:30.597 10:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.597 10:19:49 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:30.597 10:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.597 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 true 00:19:30.597 10:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.597 10:19:49 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:30.597 10:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.597 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 true 00:19:30.597 10:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.597 10:19:49 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:30.597 10:19:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.597 10:19:49 -- common/autotest_common.sh@10 -- # set +x 00:19:30.597 true 00:19:30.597 10:19:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.597 10:19:49 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:33.128 10:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.128 10:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:33.128 true 00:19:33.128 10:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:33.128 10:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.128 10:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:33.128 true 00:19:33.128 10:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:33.128 10:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.128 10:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:33.128 true 00:19:33.128 10:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:33.128 10:19:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.128 10:19:52 -- common/autotest_common.sh@10 -- # set +x 00:19:33.128 true 00:19:33.128 10:19:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:33.128 10:19:52 -- target/initiator_timeout.sh@54 -- # wait 91298 00:20:29.364 00:20:29.364 job0: (groupid=0, jobs=1): err= 0: pid=91319: Tue Nov 19 10:20:46 2024 00:20:29.364 read: IOPS=803, BW=3216KiB/s (3293kB/s)(188MiB/60000msec) 00:20:29.364 slat (usec): min=12, max=9933, avg=17.69, stdev=57.62 00:20:29.364 clat (usec): min=167, max=40794k, avg=1041.31, stdev=185735.49 00:20:29.364 lat (usec): min=181, max=40794k, avg=1059.00, stdev=185735.48 00:20:29.364 clat percentiles (usec): 00:20:29.364 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:20:29.364 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:20:29.364 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 249], 00:20:29.364 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 322], 99.95th=[ 334], 00:20:29.364 | 99.99th=[ 469] 00:20:29.364 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:20:29.364 slat (usec): min=19, max=530, avg=24.93, stdev= 8.07 00:20:29.364 clat (usec): min=114, max=1637, avg=154.70, stdev=19.46 00:20:29.364 lat (usec): min=152, max=1658, avg=179.63, stdev=22.53 00:20:29.364 clat percentiles (usec): 00:20:29.364 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:20:29.365 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 153], 00:20:29.365 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 190], 00:20:29.365 | 99.00th=[ 219], 99.50th=[ 229], 99.90th=[ 258], 99.95th=[ 285], 00:20:29.365 | 99.99th=[ 619] 00:20:29.365 bw ( KiB/s): min= 4616, max=12288, per=100.00%, avg=9988.08, stdev=1585.08, samples=38 00:20:29.365 iops : min= 1154, max= 3072, avg=2497.00, stdev=396.26, samples=38 00:20:29.365 lat (usec) : 250=97.52%, 500=2.47%, 750=0.01%, 1000=0.01% 00:20:29.365 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:20:29.365 cpu : usr=0.58%, sys=2.60%, ctx=96887, majf=0, minf=5 00:20:29.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:29.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.365 issued rwts: total=48238,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:29.365 00:20:29.365 Run status group 0 (all jobs): 00:20:29.365 READ: bw=3216KiB/s (3293kB/s), 3216KiB/s-3216KiB/s (3293kB/s-3293kB/s), io=188MiB (198MB), run=60000-60000msec 00:20:29.365 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:20:29.365 00:20:29.365 Disk stats (read/write): 00:20:29.365 nvme0n1: ios=48384/48227, merge=0/0, ticks=9823/7987, in_queue=17810, util=99.68% 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:29.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:29.365 10:20:46 -- common/autotest_common.sh@1208 -- # local i=0 00:20:29.365 10:20:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:29.365 10:20:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:29.365 10:20:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:29.365 10:20:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:29.365 10:20:46 -- common/autotest_common.sh@1220 -- # return 0 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:29.365 nvmf hotplug test: fio successful as expected 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.365 10:20:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.365 10:20:46 -- common/autotest_common.sh@10 -- # set +x 00:20:29.365 10:20:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:29.365 10:20:46 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:29.365 10:20:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:29.365 10:20:46 -- nvmf/common.sh@116 -- # sync 00:20:29.365 10:20:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:29.365 10:20:46 -- nvmf/common.sh@119 -- # set +e 00:20:29.365 10:20:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:29.365 10:20:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:29.365 rmmod nvme_tcp 00:20:29.365 rmmod nvme_fabrics 00:20:29.365 rmmod nvme_keyring 00:20:29.365 10:20:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:29.365 10:20:46 -- nvmf/common.sh@123 -- # set -e 00:20:29.365 10:20:46 -- nvmf/common.sh@124 -- # return 0 00:20:29.365 10:20:46 -- nvmf/common.sh@477 -- # '[' -n 91215 ']' 00:20:29.365 10:20:46 -- nvmf/common.sh@478 -- # killprocess 91215 00:20:29.365 10:20:46 -- common/autotest_common.sh@936 -- # '[' -z 91215 ']' 00:20:29.365 10:20:46 -- common/autotest_common.sh@940 -- # kill -0 91215 00:20:29.365 10:20:46 -- common/autotest_common.sh@941 -- # uname 00:20:29.365 10:20:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:29.365 10:20:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91215 00:20:29.365 10:20:46 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:29.365 10:20:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:29.365 killing process with pid 91215 00:20:29.365 10:20:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91215' 00:20:29.365 10:20:46 -- common/autotest_common.sh@955 -- # kill 91215 00:20:29.365 10:20:46 -- common/autotest_common.sh@960 -- # wait 91215 00:20:29.365 10:20:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:29.365 10:20:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:29.365 10:20:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:29.365 10:20:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.365 10:20:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:29.365 10:20:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.365 10:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.365 10:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.365 10:20:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:29.365 00:20:29.365 real 1m4.630s 00:20:29.365 user 4m5.707s 00:20:29.365 sys 0m9.647s 00:20:29.365 10:20:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:29.365 ************************************ 00:20:29.365 END TEST nvmf_initiator_timeout 00:20:29.365 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.365 ************************************ 00:20:29.365 10:20:47 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:20:29.365 10:20:47 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:29.365 10:20:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.365 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.365 10:20:47 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:29.365 10:20:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.365 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.365 10:20:47 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:29.365 10:20:47 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:29.365 10:20:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:29.365 10:20:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.365 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.365 ************************************ 00:20:29.365 START TEST nvmf_multicontroller 00:20:29.365 ************************************ 00:20:29.365 10:20:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:29.365 * Looking for test storage... 
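Within the initiator_timeout run that just finished above, the interesting part is what happened while the 60-second fio write job was in flight: every Delay0 latency was raised from 30 us to tens of seconds (31000000 us, and 310000000 us for p99_write as logged), held across a sleep, then dropped back to 30 us. I/Os queued during that window sit in the delay bdev long enough to exercise the initiator's timeout handling (the point of the test name), and the test's only assertion is that fio still finishes with err=0 ("nvmf hotplug test: fio successful as expected"). A sketch of the toggle, using the same bdev_delay_update_latency RPC that the rpc_cmd calls above wrap:

  # stall the device mid-run
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
  ./scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # restore sane latencies so the remaining I/O can drain before fio's runtime expires
  for l in avg_read avg_write p99_read p99_write; do
    ./scripts/rpc.py bdev_delay_update_latency Delay0 "$l" 30
  done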
00:20:29.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:29.365 10:20:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:29.365 10:20:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:29.365 10:20:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:29.365 10:20:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:29.365 10:20:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:29.365 10:20:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:29.365 10:20:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:29.365 10:20:47 -- scripts/common.sh@335 -- # IFS=.-: 00:20:29.365 10:20:47 -- scripts/common.sh@335 -- # read -ra ver1 00:20:29.365 10:20:47 -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.365 10:20:47 -- scripts/common.sh@336 -- # read -ra ver2 00:20:29.365 10:20:47 -- scripts/common.sh@337 -- # local 'op=<' 00:20:29.365 10:20:47 -- scripts/common.sh@339 -- # ver1_l=2 00:20:29.365 10:20:47 -- scripts/common.sh@340 -- # ver2_l=1 00:20:29.365 10:20:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:29.365 10:20:47 -- scripts/common.sh@343 -- # case "$op" in 00:20:29.365 10:20:47 -- scripts/common.sh@344 -- # : 1 00:20:29.365 10:20:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:29.365 10:20:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:29.365 10:20:47 -- scripts/common.sh@364 -- # decimal 1 00:20:29.365 10:20:47 -- scripts/common.sh@352 -- # local d=1 00:20:29.365 10:20:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.365 10:20:47 -- scripts/common.sh@354 -- # echo 1 00:20:29.365 10:20:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:29.365 10:20:47 -- scripts/common.sh@365 -- # decimal 2 00:20:29.365 10:20:47 -- scripts/common.sh@352 -- # local d=2 00:20:29.365 10:20:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.365 10:20:47 -- scripts/common.sh@354 -- # echo 2 00:20:29.365 10:20:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:29.365 10:20:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:29.365 10:20:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:29.365 10:20:47 -- scripts/common.sh@367 -- # return 0 00:20:29.365 10:20:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.365 10:20:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.365 --rc genhtml_branch_coverage=1 00:20:29.365 --rc genhtml_function_coverage=1 00:20:29.365 --rc genhtml_legend=1 00:20:29.365 --rc geninfo_all_blocks=1 00:20:29.365 --rc geninfo_unexecuted_blocks=1 00:20:29.365 00:20:29.365 ' 00:20:29.365 10:20:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.365 --rc genhtml_branch_coverage=1 00:20:29.365 --rc genhtml_function_coverage=1 00:20:29.365 --rc genhtml_legend=1 00:20:29.365 --rc geninfo_all_blocks=1 00:20:29.365 --rc geninfo_unexecuted_blocks=1 00:20:29.365 00:20:29.365 ' 00:20:29.365 10:20:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.365 --rc genhtml_branch_coverage=1 00:20:29.365 --rc genhtml_function_coverage=1 00:20:29.365 --rc genhtml_legend=1 00:20:29.365 --rc geninfo_all_blocks=1 00:20:29.365 --rc geninfo_unexecuted_blocks=1 00:20:29.365 00:20:29.365 ' 00:20:29.365 
10:20:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:29.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.365 --rc genhtml_branch_coverage=1 00:20:29.365 --rc genhtml_function_coverage=1 00:20:29.365 --rc genhtml_legend=1 00:20:29.365 --rc geninfo_all_blocks=1 00:20:29.365 --rc geninfo_unexecuted_blocks=1 00:20:29.365 00:20:29.366 ' 00:20:29.366 10:20:47 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.366 10:20:47 -- nvmf/common.sh@7 -- # uname -s 00:20:29.366 10:20:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.366 10:20:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.366 10:20:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.366 10:20:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.366 10:20:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.366 10:20:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.366 10:20:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.366 10:20:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.366 10:20:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.366 10:20:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:29.366 10:20:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:29.366 10:20:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.366 10:20:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.366 10:20:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.366 10:20:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.366 10:20:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.366 10:20:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.366 10:20:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.366 10:20:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.366 10:20:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.366 10:20:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.366 10:20:47 -- paths/export.sh@5 -- # export PATH 00:20:29.366 10:20:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.366 10:20:47 -- nvmf/common.sh@46 -- # : 0 00:20:29.366 10:20:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:29.366 10:20:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:29.366 10:20:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:29.366 10:20:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.366 10:20:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.366 10:20:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:29.366 10:20:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:29.366 10:20:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:29.366 10:20:47 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.366 10:20:47 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.366 10:20:47 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:29.366 10:20:47 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:29.366 10:20:47 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.366 10:20:47 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:29.366 10:20:47 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:29.366 10:20:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:29.366 10:20:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.366 10:20:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:29.366 10:20:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:29.366 10:20:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:29.366 10:20:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.366 10:20:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.366 10:20:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.366 10:20:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:29.366 10:20:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:29.366 10:20:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.366 10:20:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
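The nvmf_veth_init steps in the next entries build the topology those addresses describe: a network namespace (nvmf_tgt_ns_spdk) holds the target's interfaces, veth pairs connect it to the host side, everything is joined by the nvmf_br bridge, and an iptables rule admits TCP/4420. A sketch of the core commands, mirroring the ip/iptables calls logged below (interface and namespace names are the test suite's own):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br                 # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                   # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

A second veth pair (nvmf_tgt_if2, 10.0.0.3) is created the same way, and the ping exchanges at the end of the block confirm that 10.0.0.1, 10.0.0.2 and 10.0.0.3 can all reach each other across the bridge.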
00:20:29.366 10:20:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.366 10:20:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:29.366 10:20:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.366 10:20:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.366 10:20:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.366 10:20:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.366 10:20:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.366 10:20:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.366 10:20:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.366 10:20:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.366 10:20:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:29.366 10:20:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:29.366 Cannot find device "nvmf_tgt_br" 00:20:29.366 10:20:47 -- nvmf/common.sh@154 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.366 Cannot find device "nvmf_tgt_br2" 00:20:29.366 10:20:47 -- nvmf/common.sh@155 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:29.366 10:20:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:29.366 Cannot find device "nvmf_tgt_br" 00:20:29.366 10:20:47 -- nvmf/common.sh@157 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:29.366 Cannot find device "nvmf_tgt_br2" 00:20:29.366 10:20:47 -- nvmf/common.sh@158 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:29.366 10:20:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:29.366 10:20:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.366 10:20:47 -- nvmf/common.sh@161 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.366 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.366 10:20:47 -- nvmf/common.sh@162 -- # true 00:20:29.366 10:20:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.366 10:20:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.366 10:20:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.366 10:20:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.366 10:20:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.366 10:20:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.366 10:20:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.366 10:20:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.366 10:20:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.366 10:20:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:29.366 10:20:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:29.366 10:20:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 
00:20:29.366 10:20:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:29.366 10:20:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.366 10:20:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.366 10:20:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:29.366 10:20:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:29.366 10:20:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:29.366 10:20:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:29.366 10:20:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:29.366 10:20:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:29.366 10:20:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:29.366 10:20:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:29.366 10:20:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:29.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:29.366 00:20:29.366 --- 10.0.0.2 ping statistics --- 00:20:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.366 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:29.366 10:20:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:29.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:29.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:29.366 00:20:29.366 --- 10.0.0.3 ping statistics --- 00:20:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.366 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:29.366 10:20:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:29.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:29.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:29.366 00:20:29.366 --- 10.0.0.1 ping statistics --- 00:20:29.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.366 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:29.367 10:20:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.367 10:20:47 -- nvmf/common.sh@421 -- # return 0 00:20:29.367 10:20:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:29.367 10:20:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.367 10:20:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:29.367 10:20:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:29.367 10:20:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.367 10:20:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:29.367 10:20:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:29.367 10:20:47 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:29.367 10:20:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:29.367 10:20:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.367 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.367 10:20:47 -- nvmf/common.sh@469 -- # nvmfpid=92164 00:20:29.367 10:20:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:29.367 10:20:47 -- nvmf/common.sh@470 -- # waitforlisten 92164 00:20:29.367 10:20:47 -- common/autotest_common.sh@829 -- # '[' -z 92164 ']' 00:20:29.367 10:20:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.367 10:20:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.367 10:20:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.367 10:20:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.367 10:20:47 -- common/autotest_common.sh@10 -- # set +x 00:20:29.367 [2024-11-19 10:20:47.821168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:29.367 [2024-11-19 10:20:47.821280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.367 [2024-11-19 10:20:47.962862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.367 [2024-11-19 10:20:48.006950] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:29.367 [2024-11-19 10:20:48.007174] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.367 [2024-11-19 10:20:48.007200] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.367 [2024-11-19 10:20:48.007216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:29.367 [2024-11-19 10:20:48.007328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.367 [2024-11-19 10:20:48.008118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.367 [2024-11-19 10:20:48.008140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.367 10:20:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.367 10:20:48 -- common/autotest_common.sh@862 -- # return 0 00:20:29.367 10:20:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:29.367 10:20:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.367 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.367 10:20:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.367 10:20:48 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.367 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.367 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.367 [2024-11-19 10:20:48.877682] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.367 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.367 10:20:48 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.367 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.367 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 Malloc0 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 [2024-11-19 10:20:48.931868] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 [2024-11-19 10:20:48.939759] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 Malloc1 00:20:29.626 10:20:48 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:29.626 10:20:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.626 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:29.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.626 10:20:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.626 10:20:48 -- host/multicontroller.sh@44 -- # bdevperf_pid=92222 00:20:29.626 10:20:48 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.626 10:20:48 -- host/multicontroller.sh@47 -- # waitforlisten 92222 /var/tmp/bdevperf.sock 00:20:29.627 10:20:48 -- common/autotest_common.sh@829 -- # '[' -z 92222 ']' 00:20:29.627 10:20:48 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:29.627 10:20:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.627 10:20:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.627 10:20:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
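The multicontroller test has now laid out two malloc-backed subsystems (cnode1/Malloc0 and cnode2/Malloc1), each listening on both 10.0.0.2:4420 and 10.0.0.2:4421, and started bdevperf in RPC-server mode as the initiator-side application (the target was started with -m 0xE, and the bdevperf output further down shows it coming up on core 0). The attach/detach exchanges that follow are all driven against bdevperf's own RPC socket; a sketch of the two attach calls that succeed, copied from the entries below (roughly what rpc_cmd -s /var/tmp/bdevperf.sock issues):

  # first path: creates bdev NVMe0n1 from cnode1 via port 4420
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # attach to the same subsystem through the second listener (4421) under the same name: accepted in the log below
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The negative cases in between (different host NQN, different subsystem, -x disable, -x failover against the original path) are expected to be rejected with "A controller named NVMe0 already exists", which is exactly what the JSON-RPC error responses below show.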
00:20:29.627 10:20:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.627 10:20:48 -- common/autotest_common.sh@10 -- # set +x 00:20:30.560 10:20:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.560 10:20:50 -- common/autotest_common.sh@862 -- # return 0 00:20:30.560 10:20:50 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:30.560 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.560 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.819 NVMe0n1 00:20:30.819 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.819 10:20:50 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.819 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.819 10:20:50 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:30.819 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.819 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.819 1 00:20:30.819 10:20:50 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.819 10:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:20:30.819 10:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.819 10:20:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:30.819 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.819 10:20:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:30.819 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.819 10:20:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:30.819 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.819 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.819 2024/11/19 10:20:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:30.819 request: 00:20:30.819 { 00:20:30.819 "method": "bdev_nvme_attach_controller", 00:20:30.819 "params": { 00:20:30.819 "name": "NVMe0", 00:20:30.819 "trtype": "tcp", 00:20:30.819 "traddr": "10.0.0.2", 00:20:30.820 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:30.820 "hostaddr": "10.0.0.2", 00:20:30.820 "hostsvcid": "60000", 00:20:30.820 "adrfam": "ipv4", 00:20:30.820 "trsvcid": "4420", 00:20:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:30.820 } 00:20:30.820 } 00:20:30.820 Got JSON-RPC error response 00:20:30.820 GoRPCClient: error on JSON-RPC call 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:30.820 10:20:50 -- 
common/autotest_common.sh@653 -- # es=1 00:20:30.820 10:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.820 10:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.820 10:20:50 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.820 10:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:20:30.820 10:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.820 10:20:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 2024/11/19 10:20:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:30.820 request: 00:20:30.820 { 00:20:30.820 "method": "bdev_nvme_attach_controller", 00:20:30.820 "params": { 00:20:30.820 "name": "NVMe0", 00:20:30.820 "trtype": "tcp", 00:20:30.820 "traddr": "10.0.0.2", 00:20:30.820 "hostaddr": "10.0.0.2", 00:20:30.820 "hostsvcid": "60000", 00:20:30.820 "adrfam": "ipv4", 00:20:30.820 "trsvcid": "4420", 00:20:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:30.820 } 00:20:30.820 } 00:20:30.820 Got JSON-RPC error response 00:20:30.820 GoRPCClient: error on JSON-RPC call 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # es=1 00:20:30.820 10:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.820 10:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.820 10:20:50 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:20:30.820 10:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:30.820 10:20:50 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 2024/11/19 10:20:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:30.820 request: 00:20:30.820 { 00:20:30.820 "method": "bdev_nvme_attach_controller", 00:20:30.820 "params": { 00:20:30.820 "name": "NVMe0", 00:20:30.820 "trtype": "tcp", 00:20:30.820 "traddr": "10.0.0.2", 00:20:30.820 "hostaddr": "10.0.0.2", 00:20:30.820 "hostsvcid": "60000", 00:20:30.820 "adrfam": "ipv4", 00:20:30.820 "trsvcid": "4420", 00:20:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.820 "multipath": "disable" 00:20:30.820 } 00:20:30.820 } 00:20:30.820 Got JSON-RPC error response 00:20:30.820 GoRPCClient: error on JSON-RPC call 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # es=1 00:20:30.820 10:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.820 10:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.820 10:20:50 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.820 10:20:50 -- common/autotest_common.sh@650 -- # local es=0 00:20:30.820 10:20:50 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.820 10:20:50 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:30.820 10:20:50 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 2024/11/19 10:20:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:30.820 request: 00:20:30.820 { 00:20:30.820 "method": "bdev_nvme_attach_controller", 00:20:30.820 "params": { 00:20:30.820 "name": "NVMe0", 
00:20:30.820 "trtype": "tcp", 00:20:30.820 "traddr": "10.0.0.2", 00:20:30.820 "hostaddr": "10.0.0.2", 00:20:30.820 "hostsvcid": "60000", 00:20:30.820 "adrfam": "ipv4", 00:20:30.820 "trsvcid": "4420", 00:20:30.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.820 "multipath": "failover" 00:20:30.820 } 00:20:30.820 } 00:20:30.820 Got JSON-RPC error response 00:20:30.820 GoRPCClient: error on JSON-RPC call 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@653 -- # es=1 00:20:30.820 10:20:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.820 10:20:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:30.820 10:20:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.820 10:20:50 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.820 10:20:50 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.820 10:20:50 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:30.820 00:20:30.820 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.820 10:20:50 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.820 10:20:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.820 10:20:50 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:30.820 10:20:50 -- common/autotest_common.sh@10 -- # set +x 00:20:31.078 10:20:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.079 10:20:50 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:31.079 10:20:50 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:32.013 0 00:20:32.013 10:20:51 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:32.013 10:20:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.013 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:20:32.013 10:20:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.013 10:20:51 -- host/multicontroller.sh@100 -- # killprocess 92222 00:20:32.013 10:20:51 -- common/autotest_common.sh@936 -- # '[' -z 92222 ']' 00:20:32.013 10:20:51 -- common/autotest_common.sh@940 -- # kill -0 92222 00:20:32.013 10:20:51 -- common/autotest_common.sh@941 -- # uname 00:20:32.013 10:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.013 10:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92222 00:20:32.013 10:20:51 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:20:32.013 10:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:32.013 killing process with pid 92222 00:20:32.013 10:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92222' 00:20:32.013 10:20:51 -- common/autotest_common.sh@955 -- # kill 92222 00:20:32.013 10:20:51 -- common/autotest_common.sh@960 -- # wait 92222 00:20:32.272 10:20:51 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:32.272 10:20:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.272 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 10:20:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.272 10:20:51 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:32.272 10:20:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.272 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 10:20:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.272 10:20:51 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:32.272 10:20:51 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:32.272 10:20:51 -- common/autotest_common.sh@1607 -- # read -r file 00:20:32.272 10:20:51 -- common/autotest_common.sh@1606 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:32.272 10:20:51 -- common/autotest_common.sh@1606 -- # sort -u 00:20:32.272 10:20:51 -- common/autotest_common.sh@1608 -- # cat 00:20:32.272 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:32.272 [2024-11-19 10:20:49.052270] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:32.272 [2024-11-19 10:20:49.052423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92222 ] 00:20:32.272 [2024-11-19 10:20:49.229554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.272 [2024-11-19 10:20:49.272316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.272 [2024-11-19 10:20:50.348727] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 194ae682-5750-4274-b939-268dbb1bcbdc already exists 00:20:32.272 [2024-11-19 10:20:50.348804] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:194ae682-5750-4274-b939-268dbb1bcbdc alias for bdev NVMe1n1 00:20:32.272 [2024-11-19 10:20:50.348838] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:32.272 Running I/O for 1 seconds... 
00:20:32.272 00:20:32.272 Latency(us) 00:20:32.272 [2024-11-19T10:20:51.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.272 [2024-11-19T10:20:51.818Z] Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:32.272 NVMe0n1 : 1.00 18998.47 74.21 0.00 0.00 6727.56 2070.34 21924.77 00:20:32.272 [2024-11-19T10:20:51.818Z] =================================================================================================================== 00:20:32.272 [2024-11-19T10:20:51.818Z] Total : 18998.47 74.21 0.00 0.00 6727.56 2070.34 21924.77 00:20:32.272 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.272 00:20:32.272 Latency(us) 00:20:32.272 [2024-11-19T10:20:51.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.272 [2024-11-19T10:20:51.818Z] =================================================================================================================== 00:20:32.272 [2024-11-19T10:20:51.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.272 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:32.272 10:20:51 -- common/autotest_common.sh@1613 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:32.272 10:20:51 -- common/autotest_common.sh@1607 -- # read -r file 00:20:32.272 10:20:51 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:32.272 10:20:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:32.272 10:20:51 -- nvmf/common.sh@116 -- # sync 00:20:32.272 10:20:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:32.272 10:20:51 -- nvmf/common.sh@119 -- # set +e 00:20:32.272 10:20:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:32.272 10:20:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:32.272 rmmod nvme_tcp 00:20:32.272 rmmod nvme_fabrics 00:20:32.272 rmmod nvme_keyring 00:20:32.272 10:20:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:32.272 10:20:51 -- nvmf/common.sh@123 -- # set -e 00:20:32.272 10:20:51 -- nvmf/common.sh@124 -- # return 0 00:20:32.272 10:20:51 -- nvmf/common.sh@477 -- # '[' -n 92164 ']' 00:20:32.272 10:20:51 -- nvmf/common.sh@478 -- # killprocess 92164 00:20:32.272 10:20:51 -- common/autotest_common.sh@936 -- # '[' -z 92164 ']' 00:20:32.272 10:20:51 -- common/autotest_common.sh@940 -- # kill -0 92164 00:20:32.272 10:20:51 -- common/autotest_common.sh@941 -- # uname 00:20:32.272 10:20:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:32.272 10:20:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92164 00:20:32.531 10:20:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:32.531 10:20:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:32.531 killing process with pid 92164 00:20:32.531 10:20:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92164' 00:20:32.531 10:20:51 -- common/autotest_common.sh@955 -- # kill 92164 00:20:32.531 10:20:51 -- common/autotest_common.sh@960 -- # wait 92164 00:20:32.531 10:20:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:32.531 10:20:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:32.531 10:20:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:32.531 10:20:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.531 10:20:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:32.531 10:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.531 10:20:52 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:20:32.531 10:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.531 10:20:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:32.531 00:20:32.531 real 0m4.837s 00:20:32.531 user 0m15.341s 00:20:32.531 sys 0m0.972s 00:20:32.531 10:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:32.531 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:20:32.531 ************************************ 00:20:32.531 END TEST nvmf_multicontroller 00:20:32.531 ************************************ 00:20:32.789 10:20:52 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:32.789 10:20:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:32.789 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:20:32.789 ************************************ 00:20:32.789 START TEST nvmf_aer 00:20:32.789 ************************************ 00:20:32.789 10:20:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:32.789 * Looking for test storage... 00:20:32.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:32.789 10:20:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:32.789 10:20:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:32.789 10:20:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:32.789 10:20:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:32.789 10:20:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:32.789 10:20:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:32.789 10:20:52 -- scripts/common.sh@335 -- # IFS=.-: 00:20:32.789 10:20:52 -- scripts/common.sh@335 -- # read -ra ver1 00:20:32.789 10:20:52 -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.789 10:20:52 -- scripts/common.sh@336 -- # read -ra ver2 00:20:32.789 10:20:52 -- scripts/common.sh@337 -- # local 'op=<' 00:20:32.789 10:20:52 -- scripts/common.sh@339 -- # ver1_l=2 00:20:32.789 10:20:52 -- scripts/common.sh@340 -- # ver2_l=1 00:20:32.789 10:20:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:32.789 10:20:52 -- scripts/common.sh@343 -- # case "$op" in 00:20:32.789 10:20:52 -- scripts/common.sh@344 -- # : 1 00:20:32.789 10:20:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:32.789 10:20:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:32.789 10:20:52 -- scripts/common.sh@364 -- # decimal 1 00:20:32.789 10:20:52 -- scripts/common.sh@352 -- # local d=1 00:20:32.789 10:20:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.789 10:20:52 -- scripts/common.sh@354 -- # echo 1 00:20:32.789 10:20:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:32.789 10:20:52 -- scripts/common.sh@365 -- # decimal 2 00:20:32.789 10:20:52 -- scripts/common.sh@352 -- # local d=2 00:20:32.789 10:20:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.789 10:20:52 -- scripts/common.sh@354 -- # echo 2 00:20:32.789 10:20:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:32.789 10:20:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:32.789 10:20:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:32.789 10:20:52 -- scripts/common.sh@367 -- # return 0 00:20:32.789 10:20:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.789 --rc genhtml_branch_coverage=1 00:20:32.789 --rc genhtml_function_coverage=1 00:20:32.789 --rc genhtml_legend=1 00:20:32.789 --rc geninfo_all_blocks=1 00:20:32.789 --rc geninfo_unexecuted_blocks=1 00:20:32.789 00:20:32.789 ' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.789 --rc genhtml_branch_coverage=1 00:20:32.789 --rc genhtml_function_coverage=1 00:20:32.789 --rc genhtml_legend=1 00:20:32.789 --rc geninfo_all_blocks=1 00:20:32.789 --rc geninfo_unexecuted_blocks=1 00:20:32.789 00:20:32.789 ' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.789 --rc genhtml_branch_coverage=1 00:20:32.789 --rc genhtml_function_coverage=1 00:20:32.789 --rc genhtml_legend=1 00:20:32.789 --rc geninfo_all_blocks=1 00:20:32.789 --rc geninfo_unexecuted_blocks=1 00:20:32.789 00:20:32.789 ' 00:20:32.789 10:20:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.789 --rc genhtml_branch_coverage=1 00:20:32.789 --rc genhtml_function_coverage=1 00:20:32.789 --rc genhtml_legend=1 00:20:32.789 --rc geninfo_all_blocks=1 00:20:32.789 --rc geninfo_unexecuted_blocks=1 00:20:32.789 00:20:32.789 ' 00:20:32.789 10:20:52 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.789 10:20:52 -- nvmf/common.sh@7 -- # uname -s 00:20:32.789 10:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.789 10:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.789 10:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.789 10:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.789 10:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.789 10:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.789 10:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.789 10:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.789 10:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.789 10:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.789 10:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:32.789 
10:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:32.789 10:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.789 10:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.789 10:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.789 10:20:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.789 10:20:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.789 10:20:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.789 10:20:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.789 10:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.790 10:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.790 10:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.790 10:20:52 -- paths/export.sh@5 -- # export PATH 00:20:32.790 10:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.790 10:20:52 -- nvmf/common.sh@46 -- # : 0 00:20:32.790 10:20:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:32.790 10:20:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:32.790 10:20:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:32.790 10:20:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.790 10:20:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.790 10:20:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:32.790 10:20:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:32.790 10:20:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:32.790 10:20:52 -- host/aer.sh@11 -- # nvmftestinit 00:20:32.790 10:20:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:32.790 10:20:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.790 10:20:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:32.790 10:20:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:32.790 10:20:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:32.790 10:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.790 10:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.790 10:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.790 10:20:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:32.790 10:20:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:32.790 10:20:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:32.790 10:20:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:32.790 10:20:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:32.790 10:20:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:32.790 10:20:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.790 10:20:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.790 10:20:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:32.790 10:20:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:32.790 10:20:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.790 10:20:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.790 10:20:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.790 10:20:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.790 10:20:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.790 10:20:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.790 10:20:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.790 10:20:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.790 10:20:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:32.790 10:20:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:32.790 Cannot find device "nvmf_tgt_br" 00:20:32.790 10:20:52 -- nvmf/common.sh@154 -- # true 00:20:32.790 10:20:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.790 Cannot find device "nvmf_tgt_br2" 00:20:32.790 10:20:52 -- nvmf/common.sh@155 -- # true 00:20:32.790 10:20:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:32.790 10:20:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:33.053 Cannot find device "nvmf_tgt_br" 00:20:33.053 10:20:52 -- nvmf/common.sh@157 -- # true 00:20:33.053 10:20:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:33.053 Cannot find device "nvmf_tgt_br2" 00:20:33.053 10:20:52 -- nvmf/common.sh@158 -- # true 00:20:33.053 10:20:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:33.053 10:20:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:33.053 10:20:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.053 10:20:52 -- nvmf/common.sh@161 -- # true 00:20:33.053 10:20:52 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.053 10:20:52 -- nvmf/common.sh@162 -- # true 00:20:33.053 10:20:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.053 10:20:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.053 10:20:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.053 10:20:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.053 10:20:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.053 10:20:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.053 10:20:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.053 10:20:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:33.053 10:20:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:33.053 10:20:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:33.053 10:20:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:33.053 10:20:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:33.053 10:20:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:33.053 10:20:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:33.053 10:20:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:33.053 10:20:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:33.053 10:20:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:33.053 10:20:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:33.053 10:20:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:33.053 10:20:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:33.053 10:20:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:33.053 10:20:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:33.054 10:20:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:33.054 10:20:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:33.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:20:33.054 00:20:33.054 --- 10.0.0.2 ping statistics --- 00:20:33.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.054 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:33.054 10:20:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:33.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:33.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:33.054 00:20:33.054 --- 10.0.0.3 ping statistics --- 00:20:33.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.054 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:33.054 10:20:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:33.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:20:33.054 00:20:33.054 --- 10.0.0.1 ping statistics --- 00:20:33.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.054 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:33.054 10:20:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.054 10:20:52 -- nvmf/common.sh@421 -- # return 0 00:20:33.054 10:20:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:33.054 10:20:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.054 10:20:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:33.054 10:20:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:33.054 10:20:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.054 10:20:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:33.054 10:20:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:33.332 10:20:52 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:33.332 10:20:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:33.332 10:20:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.332 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:20:33.332 10:20:52 -- nvmf/common.sh@469 -- # nvmfpid=92475 00:20:33.332 10:20:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:33.332 10:20:52 -- nvmf/common.sh@470 -- # waitforlisten 92475 00:20:33.332 10:20:52 -- common/autotest_common.sh@829 -- # '[' -z 92475 ']' 00:20:33.332 10:20:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.332 10:20:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.332 10:20:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.332 10:20:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.332 10:20:52 -- common/autotest_common.sh@10 -- # set +x 00:20:33.332 [2024-11-19 10:20:52.676147] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:33.332 [2024-11-19 10:20:52.676250] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.332 [2024-11-19 10:20:52.818440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.332 [2024-11-19 10:20:52.859703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:33.332 [2024-11-19 10:20:52.859891] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.332 [2024-11-19 10:20:52.859908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.332 [2024-11-19 10:20:52.859919] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
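For reference, the nvmf_veth_init sequence traced above amounts to the standalone commands below. This is a reconstruction from the trace, not the harness code itself; the interface, namespace and address names (nvmf_init_if, nvmf_tgt_ns_spdk, 10.0.0.1/2/3, port 4420) are the ones the test uses, and root privileges are assumed.

# Create the target network namespace and three veth pairs; the *_if ends for the
# target move into the namespace, the *_br peer ends stay in the root namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Address the initiator side in the root namespace and the target side in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the peer ends together so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3,
# open the NVMe/TCP port, then verify connectivity and load the host driver.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp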
00:20:33.332 [2024-11-19 10:20:52.860043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.332 [2024-11-19 10:20:52.860135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.332 [2024-11-19 10:20:52.860656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.332 [2024-11-19 10:20:52.860675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.267 10:20:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:34.267 10:20:53 -- common/autotest_common.sh@862 -- # return 0 00:20:34.267 10:20:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:34.267 10:20:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 10:20:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.267 10:20:53 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 [2024-11-19 10:20:53.709010] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 Malloc0 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 [2024-11-19 10:20:53.776504] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:34.267 10:20:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.267 10:20:53 -- common/autotest_common.sh@10 -- # set +x 00:20:34.267 [2024-11-19 10:20:53.784246] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:34.267 [ 00:20:34.267 { 00:20:34.267 "allow_any_host": true, 00:20:34.267 "hosts": [], 00:20:34.267 "listen_addresses": [], 00:20:34.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:34.267 "subtype": "Discovery" 00:20:34.267 }, 00:20:34.267 { 00:20:34.267 "allow_any_host": true, 00:20:34.267 "hosts": 
[], 00:20:34.267 "listen_addresses": [ 00:20:34.267 { 00:20:34.267 "adrfam": "IPv4", 00:20:34.267 "traddr": "10.0.0.2", 00:20:34.267 "transport": "TCP", 00:20:34.267 "trsvcid": "4420", 00:20:34.267 "trtype": "TCP" 00:20:34.267 } 00:20:34.267 ], 00:20:34.267 "max_cntlid": 65519, 00:20:34.267 "max_namespaces": 2, 00:20:34.267 "min_cntlid": 1, 00:20:34.267 "model_number": "SPDK bdev Controller", 00:20:34.267 "namespaces": [ 00:20:34.267 { 00:20:34.267 "bdev_name": "Malloc0", 00:20:34.267 "name": "Malloc0", 00:20:34.267 "nguid": "C5A4ED565DEC48B0B44FC2609755374E", 00:20:34.267 "nsid": 1, 00:20:34.267 "uuid": "c5a4ed56-5dec-48b0-b44f-c2609755374e" 00:20:34.267 } 00:20:34.267 ], 00:20:34.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.267 "serial_number": "SPDK00000000000001", 00:20:34.267 "subtype": "NVMe" 00:20:34.267 } 00:20:34.267 ] 00:20:34.267 10:20:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.267 10:20:53 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:34.267 10:20:53 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:34.267 10:20:53 -- host/aer.sh@33 -- # aerpid=92529 00:20:34.267 10:20:53 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:34.267 10:20:53 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:34.267 10:20:53 -- common/autotest_common.sh@1254 -- # local i=0 00:20:34.267 10:20:53 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.267 10:20:53 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:20:34.267 10:20:53 -- common/autotest_common.sh@1257 -- # i=1 00:20:34.267 10:20:53 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:34.526 10:20:53 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.526 10:20:53 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:20:34.526 10:20:53 -- common/autotest_common.sh@1257 -- # i=2 00:20:34.526 10:20:53 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:20:34.526 10:20:54 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.526 10:20:54 -- common/autotest_common.sh@1261 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:34.526 10:20:54 -- common/autotest_common.sh@1265 -- # return 0 00:20:34.526 10:20:54 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:34.526 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.526 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.526 Malloc1 00:20:34.526 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.526 10:20:54 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:34.526 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.526 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.526 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.526 10:20:54 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:34.526 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.526 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.526 Asynchronous Event Request test 00:20:34.526 Attaching to 10.0.0.2 00:20:34.526 Attached to 10.0.0.2 00:20:34.526 Registering asynchronous event callbacks... 00:20:34.526 Starting namespace attribute notice tests for all controllers... 
00:20:34.526 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:34.526 aer_cb - Changed Namespace 00:20:34.526 Cleaning up... 00:20:34.526 [ 00:20:34.526 { 00:20:34.526 "allow_any_host": true, 00:20:34.526 "hosts": [], 00:20:34.526 "listen_addresses": [], 00:20:34.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:34.526 "subtype": "Discovery" 00:20:34.526 }, 00:20:34.526 { 00:20:34.526 "allow_any_host": true, 00:20:34.526 "hosts": [], 00:20:34.526 "listen_addresses": [ 00:20:34.526 { 00:20:34.526 "adrfam": "IPv4", 00:20:34.526 "traddr": "10.0.0.2", 00:20:34.526 "transport": "TCP", 00:20:34.526 "trsvcid": "4420", 00:20:34.526 "trtype": "TCP" 00:20:34.526 } 00:20:34.526 ], 00:20:34.526 "max_cntlid": 65519, 00:20:34.526 "max_namespaces": 2, 00:20:34.526 "min_cntlid": 1, 00:20:34.526 "model_number": "SPDK bdev Controller", 00:20:34.526 "namespaces": [ 00:20:34.526 { 00:20:34.526 "bdev_name": "Malloc0", 00:20:34.526 "name": "Malloc0", 00:20:34.526 "nguid": "C5A4ED565DEC48B0B44FC2609755374E", 00:20:34.526 "nsid": 1, 00:20:34.526 "uuid": "c5a4ed56-5dec-48b0-b44f-c2609755374e" 00:20:34.526 }, 00:20:34.526 { 00:20:34.526 "bdev_name": "Malloc1", 00:20:34.526 "name": "Malloc1", 00:20:34.526 "nguid": "32445BE035404F48A6DEC1758B60E17A", 00:20:34.526 "nsid": 2, 00:20:34.785 "uuid": "32445be0-3540-4f48-a6de-c1758b60e17a" 00:20:34.785 } 00:20:34.785 ], 00:20:34.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.785 "serial_number": "SPDK00000000000001", 00:20:34.785 "subtype": "NVMe" 00:20:34.785 } 00:20:34.785 ] 00:20:34.785 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.785 10:20:54 -- host/aer.sh@43 -- # wait 92529 00:20:34.785 10:20:54 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:34.785 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.785 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.785 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.785 10:20:54 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:34.785 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.785 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.785 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.785 10:20:54 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.785 10:20:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.785 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:34.785 10:20:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.785 10:20:54 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:34.785 10:20:54 -- host/aer.sh@51 -- # nvmftestfini 00:20:34.785 10:20:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:34.785 10:20:54 -- nvmf/common.sh@116 -- # sync 00:20:34.785 10:20:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:34.785 10:20:54 -- nvmf/common.sh@119 -- # set +e 00:20:34.785 10:20:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:34.785 10:20:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:34.785 rmmod nvme_tcp 00:20:34.785 rmmod nvme_fabrics 00:20:34.785 rmmod nvme_keyring 00:20:34.785 10:20:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:34.785 10:20:54 -- nvmf/common.sh@123 -- # set -e 00:20:34.785 10:20:54 -- nvmf/common.sh@124 -- # return 0 00:20:34.785 10:20:54 -- nvmf/common.sh@477 -- # '[' -n 92475 ']' 00:20:34.785 10:20:54 -- nvmf/common.sh@478 -- # killprocess 92475 00:20:34.785 10:20:54 -- 
common/autotest_common.sh@936 -- # '[' -z 92475 ']' 00:20:34.785 10:20:54 -- common/autotest_common.sh@940 -- # kill -0 92475 00:20:34.785 10:20:54 -- common/autotest_common.sh@941 -- # uname 00:20:34.785 10:20:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:34.785 10:20:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92475 00:20:34.785 10:20:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:34.785 10:20:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:34.785 10:20:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92475' 00:20:34.785 killing process with pid 92475 00:20:34.785 10:20:54 -- common/autotest_common.sh@955 -- # kill 92475 00:20:34.785 [2024-11-19 10:20:54.238563] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:34.785 10:20:54 -- common/autotest_common.sh@960 -- # wait 92475 00:20:35.044 10:20:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:35.044 10:20:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:35.044 10:20:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:35.044 10:20:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.044 10:20:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:35.044 10:20:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.044 10:20:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.044 10:20:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.044 10:20:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:35.044 00:20:35.044 real 0m2.332s 00:20:35.044 user 0m6.433s 00:20:35.044 sys 0m0.616s 00:20:35.044 10:20:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:35.044 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:35.044 ************************************ 00:20:35.044 END TEST nvmf_aer 00:20:35.044 ************************************ 00:20:35.044 10:20:54 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:35.044 10:20:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:35.044 10:20:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:35.044 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:35.044 ************************************ 00:20:35.044 START TEST nvmf_async_init 00:20:35.044 ************************************ 00:20:35.044 10:20:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:35.044 * Looking for test storage... 
00:20:35.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:35.044 10:20:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:35.045 10:20:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:35.045 10:20:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:35.303 10:20:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:35.303 10:20:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:35.303 10:20:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:35.303 10:20:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:35.303 10:20:54 -- scripts/common.sh@335 -- # IFS=.-: 00:20:35.303 10:20:54 -- scripts/common.sh@335 -- # read -ra ver1 00:20:35.303 10:20:54 -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.303 10:20:54 -- scripts/common.sh@336 -- # read -ra ver2 00:20:35.303 10:20:54 -- scripts/common.sh@337 -- # local 'op=<' 00:20:35.303 10:20:54 -- scripts/common.sh@339 -- # ver1_l=2 00:20:35.303 10:20:54 -- scripts/common.sh@340 -- # ver2_l=1 00:20:35.303 10:20:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:35.303 10:20:54 -- scripts/common.sh@343 -- # case "$op" in 00:20:35.303 10:20:54 -- scripts/common.sh@344 -- # : 1 00:20:35.303 10:20:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:35.303 10:20:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.303 10:20:54 -- scripts/common.sh@364 -- # decimal 1 00:20:35.303 10:20:54 -- scripts/common.sh@352 -- # local d=1 00:20:35.303 10:20:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.303 10:20:54 -- scripts/common.sh@354 -- # echo 1 00:20:35.303 10:20:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:35.303 10:20:54 -- scripts/common.sh@365 -- # decimal 2 00:20:35.303 10:20:54 -- scripts/common.sh@352 -- # local d=2 00:20:35.304 10:20:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.304 10:20:54 -- scripts/common.sh@354 -- # echo 2 00:20:35.304 10:20:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:35.304 10:20:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:35.304 10:20:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:35.304 10:20:54 -- scripts/common.sh@367 -- # return 0 00:20:35.304 10:20:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.304 10:20:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.304 --rc genhtml_branch_coverage=1 00:20:35.304 --rc genhtml_function_coverage=1 00:20:35.304 --rc genhtml_legend=1 00:20:35.304 --rc geninfo_all_blocks=1 00:20:35.304 --rc geninfo_unexecuted_blocks=1 00:20:35.304 00:20:35.304 ' 00:20:35.304 10:20:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.304 --rc genhtml_branch_coverage=1 00:20:35.304 --rc genhtml_function_coverage=1 00:20:35.304 --rc genhtml_legend=1 00:20:35.304 --rc geninfo_all_blocks=1 00:20:35.304 --rc geninfo_unexecuted_blocks=1 00:20:35.304 00:20:35.304 ' 00:20:35.304 10:20:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.304 --rc genhtml_branch_coverage=1 00:20:35.304 --rc genhtml_function_coverage=1 00:20:35.304 --rc genhtml_legend=1 00:20:35.304 --rc geninfo_all_blocks=1 00:20:35.304 --rc geninfo_unexecuted_blocks=1 00:20:35.304 00:20:35.304 ' 00:20:35.304 
10:20:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:35.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.304 --rc genhtml_branch_coverage=1 00:20:35.304 --rc genhtml_function_coverage=1 00:20:35.304 --rc genhtml_legend=1 00:20:35.304 --rc geninfo_all_blocks=1 00:20:35.304 --rc geninfo_unexecuted_blocks=1 00:20:35.304 00:20:35.304 ' 00:20:35.304 10:20:54 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:35.304 10:20:54 -- nvmf/common.sh@7 -- # uname -s 00:20:35.304 10:20:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.304 10:20:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.304 10:20:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.304 10:20:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.304 10:20:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.304 10:20:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.304 10:20:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.304 10:20:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.304 10:20:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.304 10:20:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:35.304 10:20:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:35.304 10:20:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.304 10:20:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.304 10:20:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:35.304 10:20:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:35.304 10:20:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.304 10:20:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.304 10:20:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.304 10:20:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.304 10:20:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.304 10:20:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.304 10:20:54 -- paths/export.sh@5 -- # export PATH 00:20:35.304 10:20:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.304 10:20:54 -- nvmf/common.sh@46 -- # : 0 00:20:35.304 10:20:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:35.304 10:20:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:35.304 10:20:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:35.304 10:20:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.304 10:20:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.304 10:20:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:35.304 10:20:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:35.304 10:20:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:35.304 10:20:54 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:35.304 10:20:54 -- host/async_init.sh@14 -- # null_block_size=512 00:20:35.304 10:20:54 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:35.304 10:20:54 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:35.304 10:20:54 -- host/async_init.sh@20 -- # uuidgen 00:20:35.304 10:20:54 -- host/async_init.sh@20 -- # tr -d - 00:20:35.304 10:20:54 -- host/async_init.sh@20 -- # nguid=112196726278438fa39369e876490ed6 00:20:35.304 10:20:54 -- host/async_init.sh@22 -- # nvmftestinit 00:20:35.304 10:20:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:35.304 10:20:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.304 10:20:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:35.304 10:20:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:35.304 10:20:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:35.304 10:20:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.304 10:20:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.304 10:20:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.304 10:20:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:35.304 10:20:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:35.304 10:20:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.304 10:20:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.304 10:20:54 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:35.304 10:20:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:35.304 10:20:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:35.304 10:20:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:35.304 10:20:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:35.304 10:20:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.304 10:20:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:35.304 10:20:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:35.304 10:20:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:35.304 10:20:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:35.304 10:20:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:35.304 10:20:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:35.304 Cannot find device "nvmf_tgt_br" 00:20:35.304 10:20:54 -- nvmf/common.sh@154 -- # true 00:20:35.304 10:20:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.304 Cannot find device "nvmf_tgt_br2" 00:20:35.304 10:20:54 -- nvmf/common.sh@155 -- # true 00:20:35.304 10:20:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:35.305 10:20:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:35.305 Cannot find device "nvmf_tgt_br" 00:20:35.305 10:20:54 -- nvmf/common.sh@157 -- # true 00:20:35.305 10:20:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:35.305 Cannot find device "nvmf_tgt_br2" 00:20:35.305 10:20:54 -- nvmf/common.sh@158 -- # true 00:20:35.305 10:20:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:35.305 10:20:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:35.305 10:20:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.305 10:20:54 -- nvmf/common.sh@161 -- # true 00:20:35.305 10:20:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.305 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:35.305 10:20:54 -- nvmf/common.sh@162 -- # true 00:20:35.305 10:20:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:35.305 10:20:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:35.305 10:20:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.305 10:20:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.305 10:20:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.305 10:20:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.564 10:20:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.564 10:20:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:35.564 10:20:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:35.564 10:20:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:35.564 10:20:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:35.564 10:20:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:35.564 10:20:54 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:35.564 10:20:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.564 10:20:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.564 10:20:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:35.564 10:20:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:35.564 10:20:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:35.564 10:20:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.564 10:20:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.564 10:20:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.564 10:20:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.564 10:20:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.564 10:20:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:35.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:35.564 00:20:35.564 --- 10.0.0.2 ping statistics --- 00:20:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.564 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:35.564 10:20:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:35.564 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.564 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:35.564 00:20:35.564 --- 10.0.0.3 ping statistics --- 00:20:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.564 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:35.564 10:20:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:35.564 00:20:35.564 --- 10.0.0.1 ping statistics --- 00:20:35.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.564 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:35.564 10:20:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.564 10:20:54 -- nvmf/common.sh@421 -- # return 0 00:20:35.564 10:20:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:35.564 10:20:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.564 10:20:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:35.564 10:20:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:35.564 10:20:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.564 10:20:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:35.564 10:20:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:35.564 10:20:54 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:35.564 10:20:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:35.564 10:20:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:35.564 10:20:54 -- common/autotest_common.sh@10 -- # set +x 00:20:35.564 10:20:54 -- nvmf/common.sh@469 -- # nvmfpid=92708 00:20:35.564 10:20:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:35.564 10:20:54 -- nvmf/common.sh@470 -- # waitforlisten 92708 00:20:35.564 10:20:54 -- common/autotest_common.sh@829 -- # '[' -z 92708 ']' 00:20:35.564 10:20:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.564 10:20:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:35.564 10:20:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.564 10:20:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:35.564 10:20:55 -- common/autotest_common.sh@10 -- # set +x 00:20:35.564 [2024-11-19 10:20:55.053621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:35.564 [2024-11-19 10:20:55.053714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.822 [2024-11-19 10:20:55.209917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.822 [2024-11-19 10:20:55.259066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:35.822 [2024-11-19 10:20:55.259388] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.822 [2024-11-19 10:20:55.259412] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.822 [2024-11-19 10:20:55.259428] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
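For reference, the nvmfappstart step above boils down to launching nvmf_tgt inside the target namespace and waiting for its JSON-RPC socket. A minimal sketch: the binary path and flags are taken from the trace, while the readiness loop is an assumed stand-in for the harness's waitforlisten helper, polling the real rpc_get_methods RPC on the default /var/tmp/spdk.sock socket.

# Start the target in the namespace in the background (shm id 0, full tracepoint
# mask, single core), exactly as logged above.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Wait until the JSON-RPC server answers; the UNIX socket is filesystem-based,
# so it is reachable from the root namespace.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done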
00:20:35.822 [2024-11-19 10:20:55.259462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.758 10:20:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:36.758 10:20:56 -- common/autotest_common.sh@862 -- # return 0 00:20:36.758 10:20:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:36.758 10:20:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 10:20:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.758 10:20:56 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 [2024-11-19 10:20:56.081525] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 null0 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 112196726278438fa39369e876490ed6 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:36.758 [2024-11-19 10:20:56.125656] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.758 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.758 10:20:56 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:36.758 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.758 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.017 nvme0n1 00:20:37.017 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.017 10:20:56 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:37.017 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.017 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.017 [ 00:20:37.017 { 00:20:37.018 "aliases": [ 00:20:37.018 "11219672-6278-438f-a393-69e876490ed6" 
00:20:37.018 ], 00:20:37.018 "assigned_rate_limits": { 00:20:37.018 "r_mbytes_per_sec": 0, 00:20:37.018 "rw_ios_per_sec": 0, 00:20:37.018 "rw_mbytes_per_sec": 0, 00:20:37.018 "w_mbytes_per_sec": 0 00:20:37.018 }, 00:20:37.018 "block_size": 512, 00:20:37.018 "claimed": false, 00:20:37.018 "driver_specific": { 00:20:37.018 "mp_policy": "active_passive", 00:20:37.018 "nvme": [ 00:20:37.018 { 00:20:37.018 "ctrlr_data": { 00:20:37.018 "ana_reporting": false, 00:20:37.018 "cntlid": 1, 00:20:37.018 "firmware_revision": "24.01.1", 00:20:37.018 "model_number": "SPDK bdev Controller", 00:20:37.018 "multi_ctrlr": true, 00:20:37.018 "oacs": { 00:20:37.018 "firmware": 0, 00:20:37.018 "format": 0, 00:20:37.018 "ns_manage": 0, 00:20:37.018 "security": 0 00:20:37.018 }, 00:20:37.018 "serial_number": "00000000000000000000", 00:20:37.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.018 "vendor_id": "0x8086" 00:20:37.018 }, 00:20:37.018 "ns_data": { 00:20:37.018 "can_share": true, 00:20:37.018 "id": 1 00:20:37.018 }, 00:20:37.018 "trid": { 00:20:37.018 "adrfam": "IPv4", 00:20:37.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.018 "traddr": "10.0.0.2", 00:20:37.018 "trsvcid": "4420", 00:20:37.018 "trtype": "TCP" 00:20:37.018 }, 00:20:37.018 "vs": { 00:20:37.018 "nvme_version": "1.3" 00:20:37.018 } 00:20:37.018 } 00:20:37.018 ] 00:20:37.018 }, 00:20:37.018 "name": "nvme0n1", 00:20:37.018 "num_blocks": 2097152, 00:20:37.018 "product_name": "NVMe disk", 00:20:37.018 "supported_io_types": { 00:20:37.018 "abort": true, 00:20:37.018 "compare": true, 00:20:37.018 "compare_and_write": true, 00:20:37.018 "flush": true, 00:20:37.018 "nvme_admin": true, 00:20:37.018 "nvme_io": true, 00:20:37.018 "read": true, 00:20:37.018 "reset": true, 00:20:37.018 "unmap": false, 00:20:37.018 "write": true, 00:20:37.018 "write_zeroes": true 00:20:37.018 }, 00:20:37.018 "uuid": "11219672-6278-438f-a393-69e876490ed6", 00:20:37.018 "zoned": false 00:20:37.018 } 00:20:37.018 ] 00:20:37.018 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.018 10:20:56 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:37.018 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.018 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.018 [2024-11-19 10:20:56.410350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:37.018 [2024-11-19 10:20:56.410472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x182f1c0 (9): Bad file descriptor 00:20:37.018 [2024-11-19 10:20:56.542985] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
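For reference, the rpc_cmd calls traced above for this test correspond to the following standalone scripts/rpc.py invocations against the same /var/tmp/spdk.sock socket. This is a sketch of the sequence, not the script itself; the nguid and subsystem NQN are the values generated by this particular run.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Target side: TCP transport, a 1024 MiB null bdev with 512-byte blocks, and a
# subsystem exposing it on 10.0.0.2:4420 with a fixed namespace GUID.
$RPC nvmf_create_transport -t tcp -o
$RPC bdev_null_create null0 1024 512
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 112196726278438fa39369e876490ed6
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Initiator side (same process): attach a bdev_nvme controller to that subsystem,
# inspect the resulting nvme0n1 bdev, then exercise a controller reset.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$RPC bdev_get_bdevs -b nvme0n1
$RPC bdev_nvme_reset_controller nvme0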
00:20:37.018 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.018 10:20:56 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:37.018 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.018 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.018 [ 00:20:37.018 { 00:20:37.018 "aliases": [ 00:20:37.018 "11219672-6278-438f-a393-69e876490ed6" 00:20:37.018 ], 00:20:37.018 "assigned_rate_limits": { 00:20:37.277 "r_mbytes_per_sec": 0, 00:20:37.277 "rw_ios_per_sec": 0, 00:20:37.277 "rw_mbytes_per_sec": 0, 00:20:37.277 "w_mbytes_per_sec": 0 00:20:37.277 }, 00:20:37.277 "block_size": 512, 00:20:37.277 "claimed": false, 00:20:37.277 "driver_specific": { 00:20:37.277 "mp_policy": "active_passive", 00:20:37.277 "nvme": [ 00:20:37.277 { 00:20:37.277 "ctrlr_data": { 00:20:37.277 "ana_reporting": false, 00:20:37.277 "cntlid": 2, 00:20:37.277 "firmware_revision": "24.01.1", 00:20:37.277 "model_number": "SPDK bdev Controller", 00:20:37.277 "multi_ctrlr": true, 00:20:37.277 "oacs": { 00:20:37.277 "firmware": 0, 00:20:37.277 "format": 0, 00:20:37.277 "ns_manage": 0, 00:20:37.277 "security": 0 00:20:37.277 }, 00:20:37.277 "serial_number": "00000000000000000000", 00:20:37.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.277 "vendor_id": "0x8086" 00:20:37.277 }, 00:20:37.277 "ns_data": { 00:20:37.277 "can_share": true, 00:20:37.277 "id": 1 00:20:37.277 }, 00:20:37.277 "trid": { 00:20:37.277 "adrfam": "IPv4", 00:20:37.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.277 "traddr": "10.0.0.2", 00:20:37.277 "trsvcid": "4420", 00:20:37.277 "trtype": "TCP" 00:20:37.277 }, 00:20:37.277 "vs": { 00:20:37.277 "nvme_version": "1.3" 00:20:37.277 } 00:20:37.277 } 00:20:37.277 ] 00:20:37.277 }, 00:20:37.277 "name": "nvme0n1", 00:20:37.277 "num_blocks": 2097152, 00:20:37.277 "product_name": "NVMe disk", 00:20:37.277 "supported_io_types": { 00:20:37.277 "abort": true, 00:20:37.277 "compare": true, 00:20:37.277 "compare_and_write": true, 00:20:37.277 "flush": true, 00:20:37.277 "nvme_admin": true, 00:20:37.277 "nvme_io": true, 00:20:37.277 "read": true, 00:20:37.277 "reset": true, 00:20:37.277 "unmap": false, 00:20:37.277 "write": true, 00:20:37.277 "write_zeroes": true 00:20:37.277 }, 00:20:37.277 "uuid": "11219672-6278-438f-a393-69e876490ed6", 00:20:37.277 "zoned": false 00:20:37.277 } 00:20:37.277 ] 00:20:37.277 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.277 10:20:56 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.277 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.277 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.277 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.277 10:20:56 -- host/async_init.sh@53 -- # mktemp 00:20:37.277 10:20:56 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.B67rb1vVEY 00:20:37.277 10:20:56 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:37.277 10:20:56 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.B67rb1vVEY 00:20:37.277 10:20:56 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:37.277 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.277 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.277 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.277 10:20:56 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:37.277 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.278 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.278 [2024-11-19 10:20:56.630485] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.278 [2024-11-19 10:20:56.630670] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:37.278 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.278 10:20:56 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B67rb1vVEY 00:20:37.278 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.278 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.278 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.278 10:20:56 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.B67rb1vVEY 00:20:37.278 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.278 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.278 [2024-11-19 10:20:56.646475] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.278 nvme0n1 00:20:37.278 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.278 10:20:56 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:37.278 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.278 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.278 [ 00:20:37.278 { 00:20:37.278 "aliases": [ 00:20:37.278 "11219672-6278-438f-a393-69e876490ed6" 00:20:37.278 ], 00:20:37.278 "assigned_rate_limits": { 00:20:37.278 "r_mbytes_per_sec": 0, 00:20:37.278 "rw_ios_per_sec": 0, 00:20:37.278 "rw_mbytes_per_sec": 0, 00:20:37.278 "w_mbytes_per_sec": 0 00:20:37.278 }, 00:20:37.278 "block_size": 512, 00:20:37.278 "claimed": false, 00:20:37.278 "driver_specific": { 00:20:37.278 "mp_policy": "active_passive", 00:20:37.278 "nvme": [ 00:20:37.278 { 00:20:37.278 "ctrlr_data": { 00:20:37.278 "ana_reporting": false, 00:20:37.278 "cntlid": 3, 00:20:37.278 "firmware_revision": "24.01.1", 00:20:37.278 "model_number": "SPDK bdev Controller", 00:20:37.278 "multi_ctrlr": true, 00:20:37.278 "oacs": { 00:20:37.278 "firmware": 0, 00:20:37.278 "format": 0, 00:20:37.278 "ns_manage": 0, 00:20:37.278 "security": 0 00:20:37.278 }, 00:20:37.278 "serial_number": "00000000000000000000", 00:20:37.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.278 "vendor_id": "0x8086" 00:20:37.278 }, 00:20:37.278 "ns_data": { 00:20:37.278 "can_share": true, 00:20:37.278 "id": 1 00:20:37.278 }, 00:20:37.278 "trid": { 00:20:37.278 "adrfam": "IPv4", 00:20:37.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.278 "traddr": "10.0.0.2", 00:20:37.278 "trsvcid": "4421", 00:20:37.278 "trtype": "TCP" 00:20:37.278 }, 00:20:37.278 "vs": { 00:20:37.278 "nvme_version": "1.3" 00:20:37.278 } 00:20:37.278 } 00:20:37.278 ] 00:20:37.278 }, 00:20:37.278 "name": "nvme0n1", 00:20:37.278 "num_blocks": 2097152, 00:20:37.278 "product_name": "NVMe disk", 00:20:37.278 "supported_io_types": { 00:20:37.278 "abort": true, 00:20:37.278 "compare": true, 00:20:37.278 "compare_and_write": true, 00:20:37.278 "flush": true, 00:20:37.278 "nvme_admin": true, 00:20:37.278 "nvme_io": true, 00:20:37.278 
"read": true, 00:20:37.278 "reset": true, 00:20:37.278 "unmap": false, 00:20:37.278 "write": true, 00:20:37.278 "write_zeroes": true 00:20:37.278 }, 00:20:37.278 "uuid": "11219672-6278-438f-a393-69e876490ed6", 00:20:37.278 "zoned": false 00:20:37.278 } 00:20:37.278 ] 00:20:37.278 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.278 10:20:56 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:37.278 10:20:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.278 10:20:56 -- common/autotest_common.sh@10 -- # set +x 00:20:37.278 10:20:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.278 10:20:56 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.B67rb1vVEY 00:20:37.278 10:20:56 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:37.278 10:20:56 -- host/async_init.sh@78 -- # nvmftestfini 00:20:37.278 10:20:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:37.278 10:20:56 -- nvmf/common.sh@116 -- # sync 00:20:37.278 10:20:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:37.278 10:20:56 -- nvmf/common.sh@119 -- # set +e 00:20:37.278 10:20:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:37.278 10:20:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:37.278 rmmod nvme_tcp 00:20:37.537 rmmod nvme_fabrics 00:20:37.537 rmmod nvme_keyring 00:20:37.537 10:20:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:37.537 10:20:56 -- nvmf/common.sh@123 -- # set -e 00:20:37.537 10:20:56 -- nvmf/common.sh@124 -- # return 0 00:20:37.537 10:20:56 -- nvmf/common.sh@477 -- # '[' -n 92708 ']' 00:20:37.537 10:20:56 -- nvmf/common.sh@478 -- # killprocess 92708 00:20:37.537 10:20:56 -- common/autotest_common.sh@936 -- # '[' -z 92708 ']' 00:20:37.537 10:20:56 -- common/autotest_common.sh@940 -- # kill -0 92708 00:20:37.537 10:20:56 -- common/autotest_common.sh@941 -- # uname 00:20:37.537 10:20:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.537 10:20:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92708 00:20:37.537 10:20:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.537 10:20:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.537 killing process with pid 92708 00:20:37.537 10:20:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92708' 00:20:37.537 10:20:56 -- common/autotest_common.sh@955 -- # kill 92708 00:20:37.537 10:20:56 -- common/autotest_common.sh@960 -- # wait 92708 00:20:37.537 10:20:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:37.537 10:20:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:37.537 10:20:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:37.537 10:20:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.537 10:20:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:37.537 10:20:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.537 10:20:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.537 10:20:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.537 10:20:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:37.537 00:20:37.537 real 0m2.604s 00:20:37.537 user 0m2.523s 00:20:37.537 sys 0m0.528s 00:20:37.537 10:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:37.537 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:37.537 ************************************ 00:20:37.537 END TEST nvmf_async_init 00:20:37.537 
************************************ 00:20:37.796 10:20:57 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.796 10:20:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:37.796 10:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:37.796 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:37.796 ************************************ 00:20:37.796 START TEST dma 00:20:37.796 ************************************ 00:20:37.796 10:20:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:37.796 * Looking for test storage... 00:20:37.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:37.796 10:20:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:37.796 10:20:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:37.796 10:20:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:37.796 10:20:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:37.796 10:20:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:37.796 10:20:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:37.796 10:20:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:37.796 10:20:57 -- scripts/common.sh@335 -- # IFS=.-: 00:20:37.796 10:20:57 -- scripts/common.sh@335 -- # read -ra ver1 00:20:37.796 10:20:57 -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.796 10:20:57 -- scripts/common.sh@336 -- # read -ra ver2 00:20:37.796 10:20:57 -- scripts/common.sh@337 -- # local 'op=<' 00:20:37.796 10:20:57 -- scripts/common.sh@339 -- # ver1_l=2 00:20:37.797 10:20:57 -- scripts/common.sh@340 -- # ver2_l=1 00:20:37.797 10:20:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:37.797 10:20:57 -- scripts/common.sh@343 -- # case "$op" in 00:20:37.797 10:20:57 -- scripts/common.sh@344 -- # : 1 00:20:37.797 10:20:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:37.797 10:20:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.797 10:20:57 -- scripts/common.sh@364 -- # decimal 1 00:20:37.797 10:20:57 -- scripts/common.sh@352 -- # local d=1 00:20:37.797 10:20:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.797 10:20:57 -- scripts/common.sh@354 -- # echo 1 00:20:37.797 10:20:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:37.797 10:20:57 -- scripts/common.sh@365 -- # decimal 2 00:20:37.797 10:20:57 -- scripts/common.sh@352 -- # local d=2 00:20:37.797 10:20:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.797 10:20:57 -- scripts/common.sh@354 -- # echo 2 00:20:37.797 10:20:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:37.797 10:20:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:37.797 10:20:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:37.797 10:20:57 -- scripts/common.sh@367 -- # return 0 00:20:37.797 10:20:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.797 10:20:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.797 --rc genhtml_branch_coverage=1 00:20:37.797 --rc genhtml_function_coverage=1 00:20:37.797 --rc genhtml_legend=1 00:20:37.797 --rc geninfo_all_blocks=1 00:20:37.797 --rc geninfo_unexecuted_blocks=1 00:20:37.797 00:20:37.797 ' 00:20:37.797 10:20:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.797 --rc genhtml_branch_coverage=1 00:20:37.797 --rc genhtml_function_coverage=1 00:20:37.797 --rc genhtml_legend=1 00:20:37.797 --rc geninfo_all_blocks=1 00:20:37.797 --rc geninfo_unexecuted_blocks=1 00:20:37.797 00:20:37.797 ' 00:20:37.797 10:20:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.797 --rc genhtml_branch_coverage=1 00:20:37.797 --rc genhtml_function_coverage=1 00:20:37.797 --rc genhtml_legend=1 00:20:37.797 --rc geninfo_all_blocks=1 00:20:37.797 --rc geninfo_unexecuted_blocks=1 00:20:37.797 00:20:37.797 ' 00:20:37.797 10:20:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:37.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.797 --rc genhtml_branch_coverage=1 00:20:37.797 --rc genhtml_function_coverage=1 00:20:37.797 --rc genhtml_legend=1 00:20:37.797 --rc geninfo_all_blocks=1 00:20:37.797 --rc geninfo_unexecuted_blocks=1 00:20:37.797 00:20:37.797 ' 00:20:37.797 10:20:57 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.797 10:20:57 -- nvmf/common.sh@7 -- # uname -s 00:20:37.797 10:20:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.797 10:20:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.797 10:20:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.797 10:20:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.797 10:20:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.797 10:20:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.797 10:20:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.797 10:20:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.797 10:20:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.797 10:20:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.797 10:20:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:37.797 
10:20:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:37.797 10:20:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.797 10:20:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.797 10:20:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.797 10:20:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.797 10:20:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.797 10:20:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.797 10:20:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.797 10:20:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.797 10:20:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.797 10:20:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.797 10:20:57 -- paths/export.sh@5 -- # export PATH 00:20:37.797 10:20:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.797 10:20:57 -- nvmf/common.sh@46 -- # : 0 00:20:37.797 10:20:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:37.797 10:20:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:37.797 10:20:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:37.797 10:20:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.797 10:20:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.797 10:20:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:37.797 10:20:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:37.797 10:20:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:37.797 10:20:57 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:37.797 10:20:57 -- host/dma.sh@13 -- # exit 0 00:20:37.797 00:20:37.797 real 0m0.206s 00:20:37.797 user 0m0.125s 00:20:37.797 sys 0m0.087s 00:20:37.797 10:20:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:37.797 ************************************ 00:20:37.797 END TEST dma 00:20:37.797 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:37.797 ************************************ 00:20:38.057 10:20:57 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:38.057 10:20:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.057 10:20:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.057 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:38.057 ************************************ 00:20:38.057 START TEST nvmf_identify 00:20:38.057 ************************************ 00:20:38.057 10:20:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:38.057 * Looking for test storage... 00:20:38.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.057 10:20:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:38.057 10:20:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:38.057 10:20:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:38.057 10:20:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:38.057 10:20:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:38.057 10:20:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:38.057 10:20:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:38.057 10:20:57 -- scripts/common.sh@335 -- # IFS=.-: 00:20:38.057 10:20:57 -- scripts/common.sh@335 -- # read -ra ver1 00:20:38.057 10:20:57 -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.057 10:20:57 -- scripts/common.sh@336 -- # read -ra ver2 00:20:38.057 10:20:57 -- scripts/common.sh@337 -- # local 'op=<' 00:20:38.057 10:20:57 -- scripts/common.sh@339 -- # ver1_l=2 00:20:38.057 10:20:57 -- scripts/common.sh@340 -- # ver2_l=1 00:20:38.057 10:20:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:38.057 10:20:57 -- scripts/common.sh@343 -- # case "$op" in 00:20:38.057 10:20:57 -- scripts/common.sh@344 -- # : 1 00:20:38.057 10:20:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:38.057 10:20:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.057 10:20:57 -- scripts/common.sh@364 -- # decimal 1 00:20:38.057 10:20:57 -- scripts/common.sh@352 -- # local d=1 00:20:38.057 10:20:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.057 10:20:57 -- scripts/common.sh@354 -- # echo 1 00:20:38.057 10:20:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:38.057 10:20:57 -- scripts/common.sh@365 -- # decimal 2 00:20:38.057 10:20:57 -- scripts/common.sh@352 -- # local d=2 00:20:38.057 10:20:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.057 10:20:57 -- scripts/common.sh@354 -- # echo 2 00:20:38.057 10:20:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:38.057 10:20:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:38.057 10:20:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:38.057 10:20:57 -- scripts/common.sh@367 -- # return 0 00:20:38.057 10:20:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.057 10:20:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.057 --rc genhtml_branch_coverage=1 00:20:38.057 --rc genhtml_function_coverage=1 00:20:38.057 --rc genhtml_legend=1 00:20:38.057 --rc geninfo_all_blocks=1 00:20:38.057 --rc geninfo_unexecuted_blocks=1 00:20:38.057 00:20:38.057 ' 00:20:38.057 10:20:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.057 --rc genhtml_branch_coverage=1 00:20:38.057 --rc genhtml_function_coverage=1 00:20:38.057 --rc genhtml_legend=1 00:20:38.057 --rc geninfo_all_blocks=1 00:20:38.057 --rc geninfo_unexecuted_blocks=1 00:20:38.057 00:20:38.057 ' 00:20:38.058 10:20:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:38.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.058 --rc genhtml_branch_coverage=1 00:20:38.058 --rc genhtml_function_coverage=1 00:20:38.058 --rc genhtml_legend=1 00:20:38.058 --rc geninfo_all_blocks=1 00:20:38.058 --rc geninfo_unexecuted_blocks=1 00:20:38.058 00:20:38.058 ' 00:20:38.058 10:20:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:38.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.058 --rc genhtml_branch_coverage=1 00:20:38.058 --rc genhtml_function_coverage=1 00:20:38.058 --rc genhtml_legend=1 00:20:38.058 --rc geninfo_all_blocks=1 00:20:38.058 --rc geninfo_unexecuted_blocks=1 00:20:38.058 00:20:38.058 ' 00:20:38.058 10:20:57 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.058 10:20:57 -- nvmf/common.sh@7 -- # uname -s 00:20:38.058 10:20:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.058 10:20:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.058 10:20:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.058 10:20:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.058 10:20:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.058 10:20:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.058 10:20:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.058 10:20:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.058 10:20:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.058 10:20:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:38.058 
10:20:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:38.058 10:20:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.058 10:20:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.058 10:20:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.058 10:20:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.058 10:20:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.058 10:20:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.058 10:20:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.058 10:20:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.058 10:20:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.058 10:20:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.058 10:20:57 -- paths/export.sh@5 -- # export PATH 00:20:38.058 10:20:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.058 10:20:57 -- nvmf/common.sh@46 -- # : 0 00:20:38.058 10:20:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:38.058 10:20:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:38.058 10:20:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:38.058 10:20:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.058 10:20:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.058 10:20:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:20:38.058 10:20:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:38.058 10:20:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:38.058 10:20:57 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.058 10:20:57 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.058 10:20:57 -- host/identify.sh@14 -- # nvmftestinit 00:20:38.058 10:20:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:38.058 10:20:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.058 10:20:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:38.058 10:20:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:38.058 10:20:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:38.058 10:20:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.058 10:20:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.058 10:20:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.058 10:20:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:38.058 10:20:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:38.058 10:20:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.058 10:20:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.058 10:20:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:38.058 10:20:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:38.058 10:20:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.058 10:20:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.058 10:20:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.058 10:20:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.058 10:20:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.058 10:20:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.058 10:20:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.058 10:20:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.058 10:20:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:38.058 10:20:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:38.058 Cannot find device "nvmf_tgt_br" 00:20:38.058 10:20:57 -- nvmf/common.sh@154 -- # true 00:20:38.058 10:20:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.058 Cannot find device "nvmf_tgt_br2" 00:20:38.058 10:20:57 -- nvmf/common.sh@155 -- # true 00:20:38.058 10:20:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:38.320 10:20:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:38.320 Cannot find device "nvmf_tgt_br" 00:20:38.320 10:20:57 -- nvmf/common.sh@157 -- # true 00:20:38.320 10:20:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:38.320 Cannot find device "nvmf_tgt_br2" 00:20:38.320 10:20:57 -- nvmf/common.sh@158 -- # true 00:20:38.320 10:20:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:38.320 10:20:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:38.320 10:20:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.320 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:38.320 10:20:57 -- nvmf/common.sh@161 -- # true 00:20:38.320 10:20:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.320 10:20:57 -- nvmf/common.sh@162 -- # true 00:20:38.320 10:20:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.320 10:20:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.320 10:20:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.320 10:20:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.320 10:20:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.320 10:20:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.320 10:20:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.320 10:20:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:38.320 10:20:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:38.320 10:20:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:38.320 10:20:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:38.320 10:20:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:38.320 10:20:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:38.320 10:20:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.320 10:20:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:38.320 10:20:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.320 10:20:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:38.320 10:20:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:38.320 10:20:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.578 10:20:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.578 10:20:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.578 10:20:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.578 10:20:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.578 10:20:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:38.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:20:38.578 00:20:38.578 --- 10.0.0.2 ping statistics --- 00:20:38.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.578 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:20:38.578 10:20:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:38.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:38.578 00:20:38.578 --- 10.0.0.3 ping statistics --- 00:20:38.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.578 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:38.578 10:20:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:38.578 00:20:38.578 --- 10.0.0.1 ping statistics --- 00:20:38.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.578 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:38.578 10:20:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.578 10:20:57 -- nvmf/common.sh@421 -- # return 0 00:20:38.578 10:20:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:38.578 10:20:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.578 10:20:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:38.578 10:20:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:38.578 10:20:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.578 10:20:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:38.578 10:20:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:38.578 10:20:57 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:38.578 10:20:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.578 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:38.578 10:20:57 -- host/identify.sh@19 -- # nvmfpid=92984 00:20:38.578 10:20:57 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:38.578 10:20:57 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:38.578 10:20:57 -- host/identify.sh@23 -- # waitforlisten 92984 00:20:38.578 10:20:57 -- common/autotest_common.sh@829 -- # '[' -z 92984 ']' 00:20:38.578 10:20:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.578 10:20:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.578 10:20:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.578 10:20:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.578 10:20:57 -- common/autotest_common.sh@10 -- # set +x 00:20:38.578 [2024-11-19 10:20:57.989941] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:38.578 [2024-11-19 10:20:57.990041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.837 [2024-11-19 10:20:58.130099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.837 [2024-11-19 10:20:58.167118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:38.837 [2024-11-19 10:20:58.167264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.837 [2024-11-19 10:20:58.167277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.837 [2024-11-19 10:20:58.167286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
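Before the identify test starts its target, nvmftestinit (nvmf/common.sh) tears down and rebuilds the virtual test network; the "Cannot find device" and "No such file or directory" messages above are tolerated cleanup of leftovers from a previous run (each is followed by "# true" in the trace). A condensed sketch of the topology it creates, using only commands that appear in the trace (the intermediate "ip link set ... up" steps are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings confirm the bridge forwards traffic, and nvmf_tgt is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so 10.0.0.2 is the address the listeners bind to while the host-side tools connect from 10.0.0.1.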
00:20:38.837 [2024-11-19 10:20:58.167397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.837 [2024-11-19 10:20:58.167734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.837 [2024-11-19 10:20:58.168080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.837 [2024-11-19 10:20:58.168086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.837 10:20:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.837 10:20:58 -- common/autotest_common.sh@862 -- # return 0 00:20:38.837 10:20:58 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 [2024-11-19 10:20:58.263463] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:38.837 10:20:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 10:20:58 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 Malloc0 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 [2024-11-19 10:20:58.354039] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:38.837 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.837 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:38.837 [2024-11-19 10:20:58.369790] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:38.837 [ 
00:20:38.837 { 00:20:38.837 "allow_any_host": true, 00:20:38.837 "hosts": [], 00:20:38.837 "listen_addresses": [ 00:20:38.837 { 00:20:38.837 "adrfam": "IPv4", 00:20:38.837 "traddr": "10.0.0.2", 00:20:38.837 "transport": "TCP", 00:20:38.837 "trsvcid": "4420", 00:20:38.837 "trtype": "TCP" 00:20:38.837 } 00:20:38.837 ], 00:20:38.837 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:38.837 "subtype": "Discovery" 00:20:38.837 }, 00:20:38.837 { 00:20:38.837 "allow_any_host": true, 00:20:38.837 "hosts": [], 00:20:38.837 "listen_addresses": [ 00:20:38.837 { 00:20:38.837 "adrfam": "IPv4", 00:20:38.837 "traddr": "10.0.0.2", 00:20:38.837 "transport": "TCP", 00:20:38.837 "trsvcid": "4420", 00:20:38.837 "trtype": "TCP" 00:20:38.837 } 00:20:38.837 ], 00:20:38.837 "max_cntlid": 65519, 00:20:38.837 "max_namespaces": 32, 00:20:38.837 "min_cntlid": 1, 00:20:38.837 "model_number": "SPDK bdev Controller", 00:20:38.837 "namespaces": [ 00:20:38.837 { 00:20:38.837 "bdev_name": "Malloc0", 00:20:38.837 "eui64": "ABCDEF0123456789", 00:20:38.837 "name": "Malloc0", 00:20:38.837 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:38.837 "nsid": 1, 00:20:38.837 "uuid": "e5008960-e1d3-4505-afc6-e4096459b47a" 00:20:38.837 } 00:20:38.837 ], 00:20:38.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.837 "serial_number": "SPDK00000000000001", 00:20:38.837 "subtype": "NVMe" 00:20:38.837 } 00:20:38.837 ] 00:20:38.837 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.837 10:20:58 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:39.139 [2024-11-19 10:20:58.399395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
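The identify test follows the same pattern as async_init: a 64 MiB Malloc0 bdev becomes namespace 1 of nqn.2016-06.io.spdk:cnode1 (with explicit --nguid/--eui64), listeners are added on 10.0.0.2:4420 for that subsystem and for discovery, and the spdk_nvme_identify example app is pointed at the discovery NQN. A sketch of that invocation as traced above (the relative path is an assumption; -L all is what produces the nvme_tcp/nvme_ctrlr debug lines that follow):

  # assumed: run from the SPDK build tree; the transport ID string mirrors the trace
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

The debug output that follows walks the standard fabrics bring-up: icreq/icresp on the new TCP connection, FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, CC.EN written to 1, a wait for CSTS.RDY = 1, then IDENTIFY controller plus keep-alive and async-event configuration.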
00:20:39.139 [2024-11-19 10:20:58.399442] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93024 ] 00:20:39.139 [2024-11-19 10:20:58.541200] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:39.139 [2024-11-19 10:20:58.541274] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:39.139 [2024-11-19 10:20:58.541282] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:39.139 [2024-11-19 10:20:58.541298] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:39.139 [2024-11-19 10:20:58.541308] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:39.139 [2024-11-19 10:20:58.541440] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:39.139 [2024-11-19 10:20:58.541496] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x686540 0 00:20:39.139 [2024-11-19 10:20:58.555850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:39.139 [2024-11-19 10:20:58.555878] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:39.139 [2024-11-19 10:20:58.555885] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:39.139 [2024-11-19 10:20:58.555889] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:39.139 [2024-11-19 10:20:58.555936] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.555944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.555949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.139 [2024-11-19 10:20:58.555964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:39.139 [2024-11-19 10:20:58.555998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.139 [2024-11-19 10:20:58.563845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.139 [2024-11-19 10:20:58.563868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.139 [2024-11-19 10:20:58.563874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.563879] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.139 [2024-11-19 10:20:58.563891] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:39.139 [2024-11-19 10:20:58.563901] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:39.139 [2024-11-19 10:20:58.563908] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:39.139 [2024-11-19 10:20:58.563926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.563932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.563937] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.139 [2024-11-19 10:20:58.563947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.139 [2024-11-19 10:20:58.563979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.139 [2024-11-19 10:20:58.564109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.139 [2024-11-19 10:20:58.564130] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.139 [2024-11-19 10:20:58.564136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.564140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.139 [2024-11-19 10:20:58.564147] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:39.139 [2024-11-19 10:20:58.564157] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:39.139 [2024-11-19 10:20:58.564166] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.564170] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.139 [2024-11-19 10:20:58.564175] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.139 [2024-11-19 10:20:58.564183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.564205] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.564309] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.564317] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.140 [2024-11-19 10:20:58.564321] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564326] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.564332] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:39.140 [2024-11-19 10:20:58.564342] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.564359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.140 [2024-11-19 10:20:58.564376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.564396] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.564491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.564504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:39.140 [2024-11-19 10:20:58.564508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564513] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.564520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.564531] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564537] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.140 [2024-11-19 10:20:58.564549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.564569] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.564665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.564673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.140 [2024-11-19 10:20:58.564677] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564681] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.564687] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:39.140 [2024-11-19 10:20:58.564692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.564701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.564808] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:39.140 [2024-11-19 10:20:58.564814] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.564836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564842] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564847] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.140 [2024-11-19 10:20:58.564855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.564878] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.564971] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.564983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.140 [2024-11-19 10:20:58.564988] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.564993] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.564999] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:39.140 [2024-11-19 10:20:58.565010] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565020] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.140 [2024-11-19 10:20:58.565028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.565048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.565138] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.565157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.140 [2024-11-19 10:20:58.565162] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.565173] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:39.140 [2024-11-19 10:20:58.565178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:39.140 [2024-11-19 10:20:58.565188] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:39.140 [2024-11-19 10:20:58.565206] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:39.140 [2024-11-19 10:20:58.565217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565222] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565226] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.140 [2024-11-19 10:20:58.565235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.140 [2024-11-19 10:20:58.565257] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.140 [2024-11-19 10:20:58.565402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.140 [2024-11-19 10:20:58.565417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.140 [2024-11-19 10:20:58.565422] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565427] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686540): datao=0, datal=4096, cccid=0 00:20:39.140 [2024-11-19 10:20:58.565432] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf220) on tqpair(0x686540): expected_datao=0, payload_size=4096 00:20:39.140 [2024-11-19 10:20:58.565442] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565448] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565458] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.140 [2024-11-19 10:20:58.565474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.140 [2024-11-19 10:20:58.565478] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.140 [2024-11-19 10:20:58.565483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.140 [2024-11-19 10:20:58.565493] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:39.140 [2024-11-19 10:20:58.565499] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:39.140 [2024-11-19 10:20:58.565504] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:39.140 [2024-11-19 10:20:58.565510] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:39.140 [2024-11-19 10:20:58.565515] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:39.140 [2024-11-19 10:20:58.565521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:39.140 [2024-11-19 10:20:58.565536] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:39.140 [2024-11-19 10:20:58.565546] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565555] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.141 [2024-11-19 10:20:58.565586] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.141 [2024-11-19 10:20:58.565697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.565712] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.565717] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf220) on tqpair=0x686540 00:20:39.141 [2024-11-19 10:20:58.565731] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565736] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.141 [2024-11-19 10:20:58.565754] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565759] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565763] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.141 [2024-11-19 10:20:58.565776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.141 [2024-11-19 10:20:58.565798] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565802] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565806] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.141 [2024-11-19 10:20:58.565830] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:39.141 [2024-11-19 10:20:58.565847] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:39.141 [2024-11-19 10:20:58.565856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.565864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.565872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.141 [2024-11-19 10:20:58.565897] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf220, cid 0, qid 0 00:20:39.141 [2024-11-19 10:20:58.565906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf380, cid 1, qid 0 00:20:39.141 [2024-11-19 10:20:58.565911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf4e0, cid 2, qid 0 00:20:39.141 [2024-11-19 10:20:58.565917] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.141 [2024-11-19 10:20:58.565922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf7a0, cid 4, qid 0 00:20:39.141 [2024-11-19 10:20:58.566079] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.566090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.566095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566100] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf7a0) on tqpair=0x686540 00:20:39.141 
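The entries above show the host finishing initialization of the discovery controller: a SET FEATURES ASYNC EVENT CONFIGURATION command, four queued ASYNC EVENT REQUEST capsules (cid 0-3), and a GET FEATURES KEEP ALIVE TIMER exchange. The autotest drives all of this through the stock spdk_nvme_identify example, so the following is only an illustrative sketch of how the same behavior is exposed through SPDK's public host API (spdk_nvme_ctrlr_opts.keep_alive_timeout_ms and spdk_nvme_ctrlr_register_aer_callback()); none of this code appears in the test itself.

    /* Illustrative sketch only -- maps the AER / keep-alive log lines above onto
     * SPDK's public host API; the autotest itself uses the stock identify example. */
    #include "spdk/nvme.h"
    #include <stdio.h>

    /* Invoked when one of the queued ASYNC EVENT REQUESTs (cid 0-3 above) completes. */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)arg;
            printf("AER completed: cdw0=0x%08x\n", cpl->cdw0);
    }

    static struct spdk_nvme_ctrlr *
    connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
    {
            struct spdk_nvme_ctrlr_opts opts;
            struct spdk_nvme_ctrlr *ctrlr;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            /* Requested keep-alive timeout; the value read back via GET FEATURES
             * KEEP ALIVE TIMER determines the send cadence that the log reports
             * as "Sending keep alive every 5000000 us". */
            opts.keep_alive_timeout_ms = 10000;

            ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
            if (ctrlr != NULL) {
                    /* The driver configures AER and queues the request capsules
                     * itself; the application only registers a completion callback. */
                    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
            }
            return ctrlr;
    }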
[2024-11-19 10:20:58.566106] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:39.141 [2024-11-19 10:20:58.566112] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:39.141 [2024-11-19 10:20:58.566124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.566142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.141 [2024-11-19 10:20:58.566163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf7a0, cid 4, qid 0 00:20:39.141 [2024-11-19 10:20:58.566274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.141 [2024-11-19 10:20:58.566285] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.141 [2024-11-19 10:20:58.566290] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566294] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686540): datao=0, datal=4096, cccid=4 00:20:39.141 [2024-11-19 10:20:58.566299] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf7a0) on tqpair(0x686540): expected_datao=0, payload_size=4096 00:20:39.141 [2024-11-19 10:20:58.566308] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566313] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.566329] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.566333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf7a0) on tqpair=0x686540 00:20:39.141 [2024-11-19 10:20:58.566353] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:39.141 [2024-11-19 10:20:58.566381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.566400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.141 [2024-11-19 10:20:58.566408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.566423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:39.141 [2024-11-19 10:20:58.566450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf7a0, cid 4, qid 0 00:20:39.141 [2024-11-19 10:20:58.566458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf900, cid 5, qid 0 00:20:39.141 [2024-11-19 10:20:58.566616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.141 [2024-11-19 10:20:58.566629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.141 [2024-11-19 10:20:58.566634] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566638] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686540): datao=0, datal=1024, cccid=4 00:20:39.141 [2024-11-19 10:20:58.566643] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf7a0) on tqpair(0x686540): expected_datao=0, payload_size=1024 00:20:39.141 [2024-11-19 10:20:58.566652] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566656] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566662] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.566669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.566673] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.566677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf900) on tqpair=0x686540 00:20:39.141 [2024-11-19 10:20:58.606921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.606956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.606962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.606977] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf7a0) on tqpair=0x686540 00:20:39.141 [2024-11-19 10:20:58.607029] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607041] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607046] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686540) 00:20:39.141 [2024-11-19 10:20:58.607060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.141 [2024-11-19 10:20:58.607114] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf7a0, cid 4, qid 0 00:20:39.141 [2024-11-19 10:20:58.607240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.141 [2024-11-19 10:20:58.607248] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.141 [2024-11-19 10:20:58.607253] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607257] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686540): datao=0, datal=3072, cccid=4 00:20:39.141 [2024-11-19 10:20:58.607263] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf7a0) on tqpair(0x686540): expected_datao=0, payload_size=3072 00:20:39.141 [2024-11-19 10:20:58.607273] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 
10:20:58.607278] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.141 [2024-11-19 10:20:58.607295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.141 [2024-11-19 10:20:58.607299] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607304] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf7a0) on tqpair=0x686540 00:20:39.141 [2024-11-19 10:20:58.607315] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607321] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.141 [2024-11-19 10:20:58.607325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x686540) 00:20:39.142 [2024-11-19 10:20:58.607333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.142 [2024-11-19 10:20:58.607362] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf7a0, cid 4, qid 0 00:20:39.142 [2024-11-19 10:20:58.607438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.142 [2024-11-19 10:20:58.607446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.142 [2024-11-19 10:20:58.607451] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.142 [2024-11-19 10:20:58.607455] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x686540): datao=0, datal=8, cccid=4 00:20:39.142 [2024-11-19 10:20:58.607460] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6bf7a0) on tqpair(0x686540): expected_datao=0, payload_size=8 00:20:39.142 [2024-11-19 10:20:58.607468] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.142 [2024-11-19 10:20:58.607473] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.142 [2024-11-19 10:20:58.651868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.142 [2024-11-19 10:20:58.651914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.142 [2024-11-19 10:20:58.651921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.142 [2024-11-19 10:20:58.651927] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf7a0) on tqpair=0x686540 00:20:39.142 ===================================================== 00:20:39.142 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:39.142 ===================================================== 00:20:39.142 Controller Capabilities/Features 00:20:39.142 ================================ 00:20:39.142 Vendor ID: 0000 00:20:39.142 Subsystem Vendor ID: 0000 00:20:39.142 Serial Number: .................... 00:20:39.142 Model Number: ........................................ 
00:20:39.142 Firmware Version: 24.01.1 00:20:39.142 Recommended Arb Burst: 0 00:20:39.142 IEEE OUI Identifier: 00 00 00 00:20:39.142 Multi-path I/O 00:20:39.142 May have multiple subsystem ports: No 00:20:39.142 May have multiple controllers: No 00:20:39.142 Associated with SR-IOV VF: No 00:20:39.142 Max Data Transfer Size: 131072 00:20:39.142 Max Number of Namespaces: 0 00:20:39.142 Max Number of I/O Queues: 1024 00:20:39.142 NVMe Specification Version (VS): 1.3 00:20:39.142 NVMe Specification Version (Identify): 1.3 00:20:39.142 Maximum Queue Entries: 128 00:20:39.142 Contiguous Queues Required: Yes 00:20:39.142 Arbitration Mechanisms Supported 00:20:39.142 Weighted Round Robin: Not Supported 00:20:39.142 Vendor Specific: Not Supported 00:20:39.142 Reset Timeout: 15000 ms 00:20:39.142 Doorbell Stride: 4 bytes 00:20:39.142 NVM Subsystem Reset: Not Supported 00:20:39.142 Command Sets Supported 00:20:39.142 NVM Command Set: Supported 00:20:39.142 Boot Partition: Not Supported 00:20:39.142 Memory Page Size Minimum: 4096 bytes 00:20:39.142 Memory Page Size Maximum: 4096 bytes 00:20:39.142 Persistent Memory Region: Not Supported 00:20:39.142 Optional Asynchronous Events Supported 00:20:39.142 Namespace Attribute Notices: Not Supported 00:20:39.142 Firmware Activation Notices: Not Supported 00:20:39.142 ANA Change Notices: Not Supported 00:20:39.142 PLE Aggregate Log Change Notices: Not Supported 00:20:39.142 LBA Status Info Alert Notices: Not Supported 00:20:39.142 EGE Aggregate Log Change Notices: Not Supported 00:20:39.142 Normal NVM Subsystem Shutdown event: Not Supported 00:20:39.142 Zone Descriptor Change Notices: Not Supported 00:20:39.142 Discovery Log Change Notices: Supported 00:20:39.142 Controller Attributes 00:20:39.142 128-bit Host Identifier: Not Supported 00:20:39.142 Non-Operational Permissive Mode: Not Supported 00:20:39.142 NVM Sets: Not Supported 00:20:39.142 Read Recovery Levels: Not Supported 00:20:39.142 Endurance Groups: Not Supported 00:20:39.142 Predictable Latency Mode: Not Supported 00:20:39.142 Traffic Based Keep ALive: Not Supported 00:20:39.142 Namespace Granularity: Not Supported 00:20:39.142 SQ Associations: Not Supported 00:20:39.142 UUID List: Not Supported 00:20:39.142 Multi-Domain Subsystem: Not Supported 00:20:39.142 Fixed Capacity Management: Not Supported 00:20:39.142 Variable Capacity Management: Not Supported 00:20:39.142 Delete Endurance Group: Not Supported 00:20:39.142 Delete NVM Set: Not Supported 00:20:39.142 Extended LBA Formats Supported: Not Supported 00:20:39.142 Flexible Data Placement Supported: Not Supported 00:20:39.142 00:20:39.142 Controller Memory Buffer Support 00:20:39.142 ================================ 00:20:39.142 Supported: No 00:20:39.142 00:20:39.142 Persistent Memory Region Support 00:20:39.142 ================================ 00:20:39.142 Supported: No 00:20:39.142 00:20:39.142 Admin Command Set Attributes 00:20:39.142 ============================ 00:20:39.142 Security Send/Receive: Not Supported 00:20:39.142 Format NVM: Not Supported 00:20:39.142 Firmware Activate/Download: Not Supported 00:20:39.142 Namespace Management: Not Supported 00:20:39.142 Device Self-Test: Not Supported 00:20:39.142 Directives: Not Supported 00:20:39.142 NVMe-MI: Not Supported 00:20:39.142 Virtualization Management: Not Supported 00:20:39.142 Doorbell Buffer Config: Not Supported 00:20:39.142 Get LBA Status Capability: Not Supported 00:20:39.142 Command & Feature Lockdown Capability: Not Supported 00:20:39.142 Abort Command Limit: 1 00:20:39.142 
Async Event Request Limit: 4 00:20:39.142 Number of Firmware Slots: N/A 00:20:39.142 Firmware Slot 1 Read-Only: N/A 00:20:39.142 Firmware Activation Without Reset: N/A 00:20:39.142 Multiple Update Detection Support: N/A 00:20:39.142 Firmware Update Granularity: No Information Provided 00:20:39.142 Per-Namespace SMART Log: No 00:20:39.142 Asymmetric Namespace Access Log Page: Not Supported 00:20:39.142 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:39.142 Command Effects Log Page: Not Supported 00:20:39.142 Get Log Page Extended Data: Supported 00:20:39.142 Telemetry Log Pages: Not Supported 00:20:39.142 Persistent Event Log Pages: Not Supported 00:20:39.142 Supported Log Pages Log Page: May Support 00:20:39.142 Commands Supported & Effects Log Page: Not Supported 00:20:39.142 Feature Identifiers & Effects Log Page:May Support 00:20:39.142 NVMe-MI Commands & Effects Log Page: May Support 00:20:39.142 Data Area 4 for Telemetry Log: Not Supported 00:20:39.142 Error Log Page Entries Supported: 128 00:20:39.142 Keep Alive: Not Supported 00:20:39.142 00:20:39.142 NVM Command Set Attributes 00:20:39.142 ========================== 00:20:39.142 Submission Queue Entry Size 00:20:39.142 Max: 1 00:20:39.142 Min: 1 00:20:39.142 Completion Queue Entry Size 00:20:39.142 Max: 1 00:20:39.142 Min: 1 00:20:39.142 Number of Namespaces: 0 00:20:39.142 Compare Command: Not Supported 00:20:39.142 Write Uncorrectable Command: Not Supported 00:20:39.142 Dataset Management Command: Not Supported 00:20:39.142 Write Zeroes Command: Not Supported 00:20:39.142 Set Features Save Field: Not Supported 00:20:39.142 Reservations: Not Supported 00:20:39.142 Timestamp: Not Supported 00:20:39.142 Copy: Not Supported 00:20:39.142 Volatile Write Cache: Not Present 00:20:39.142 Atomic Write Unit (Normal): 1 00:20:39.142 Atomic Write Unit (PFail): 1 00:20:39.142 Atomic Compare & Write Unit: 1 00:20:39.142 Fused Compare & Write: Supported 00:20:39.142 Scatter-Gather List 00:20:39.142 SGL Command Set: Supported 00:20:39.142 SGL Keyed: Supported 00:20:39.142 SGL Bit Bucket Descriptor: Not Supported 00:20:39.142 SGL Metadata Pointer: Not Supported 00:20:39.142 Oversized SGL: Not Supported 00:20:39.142 SGL Metadata Address: Not Supported 00:20:39.142 SGL Offset: Supported 00:20:39.142 Transport SGL Data Block: Not Supported 00:20:39.142 Replay Protected Memory Block: Not Supported 00:20:39.142 00:20:39.142 Firmware Slot Information 00:20:39.142 ========================= 00:20:39.142 Active slot: 0 00:20:39.142 00:20:39.142 00:20:39.142 Error Log 00:20:39.142 ========= 00:20:39.142 00:20:39.142 Active Namespaces 00:20:39.142 ================= 00:20:39.142 Discovery Log Page 00:20:39.142 ================== 00:20:39.142 Generation Counter: 2 00:20:39.142 Number of Records: 2 00:20:39.142 Record Format: 0 00:20:39.142 00:20:39.142 Discovery Log Entry 0 00:20:39.143 ---------------------- 00:20:39.143 Transport Type: 3 (TCP) 00:20:39.143 Address Family: 1 (IPv4) 00:20:39.143 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:39.143 Entry Flags: 00:20:39.143 Duplicate Returned Information: 1 00:20:39.143 Explicit Persistent Connection Support for Discovery: 1 00:20:39.143 Transport Requirements: 00:20:39.143 Secure Channel: Not Required 00:20:39.143 Port ID: 0 (0x0000) 00:20:39.143 Controller ID: 65535 (0xffff) 00:20:39.143 Admin Max SQ Size: 128 00:20:39.143 Transport Service Identifier: 4420 00:20:39.143 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:39.143 Transport Address: 10.0.0.2 00:20:39.143 
Discovery Log Entry 1 00:20:39.143 ---------------------- 00:20:39.143 Transport Type: 3 (TCP) 00:20:39.143 Address Family: 1 (IPv4) 00:20:39.143 Subsystem Type: 2 (NVM Subsystem) 00:20:39.143 Entry Flags: 00:20:39.143 Duplicate Returned Information: 0 00:20:39.143 Explicit Persistent Connection Support for Discovery: 0 00:20:39.143 Transport Requirements: 00:20:39.143 Secure Channel: Not Required 00:20:39.143 Port ID: 0 (0x0000) 00:20:39.143 Controller ID: 65535 (0xffff) 00:20:39.143 Admin Max SQ Size: 128 00:20:39.143 Transport Service Identifier: 4420 00:20:39.143 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:39.143 Transport Address: 10.0.0.2 [2024-11-19 10:20:58.652113] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:39.143 [2024-11-19 10:20:58.652139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.143 [2024-11-19 10:20:58.652149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.143 [2024-11-19 10:20:58.652156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.143 [2024-11-19 10:20:58.652162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.143 [2024-11-19 10:20:58.652178] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652234] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.652336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.652340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.652354] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652359] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652363] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652482] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.652489] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.652494] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652498] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.652504] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:39.143 [2024-11-19 10:20:58.652510] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:39.143 [2024-11-19 10:20:58.652522] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652527] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652560] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652616] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.652624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.652628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.652645] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652683] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652735] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.652742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.652746] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.652762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 
10:20:58.652879] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.652883] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.652900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.652909] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.652917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.652940] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.652992] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.653000] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.653004] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653008] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.653019] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653025] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653029] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.653037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.653057] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.653127] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.653134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.653139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.653154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.653172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.143 [2024-11-19 10:20:58.653192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.143 [2024-11-19 10:20:58.653248] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.143 [2024-11-19 10:20:58.653255] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.143 [2024-11-19 10:20:58.653259] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.143 
[2024-11-19 10:20:58.653264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.143 [2024-11-19 10:20:58.653275] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653280] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.143 [2024-11-19 10:20:58.653285] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.143 [2024-11-19 10:20:58.653292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653312] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.653370] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.653378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.653382] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.653398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.653416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.653486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.653493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.653497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.653513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.653530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.653602] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.653609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.653613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.653629] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653634] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653638] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.653646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.653734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.653741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.653745] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.653761] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653766] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.653779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.653878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.653887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.653892] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.653908] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.653918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.653926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.653947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.654003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.654010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.654014] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.654030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654040] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.654048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.654068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.144 [2024-11-19 10:20:58.654132] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.144 [2024-11-19 10:20:58.654140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.144 [2024-11-19 10:20:58.654144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.144 [2024-11-19 10:20:58.654160] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.144 [2024-11-19 10:20:58.654169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.144 [2024-11-19 10:20:58.654177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.144 [2024-11-19 10:20:58.654196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.410 [2024-11-19 10:20:58.654256] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.410 [2024-11-19 10:20:58.654265] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.410 [2024-11-19 10:20:58.654269] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654274] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.410 [2024-11-19 10:20:58.654285] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.410 [2024-11-19 10:20:58.654303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.410 [2024-11-19 10:20:58.654323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.410 [2024-11-19 10:20:58.654376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.410 [2024-11-19 10:20:58.654393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.410 [2024-11-19 10:20:58.654398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.410 [2024-11-19 10:20:58.654415] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654420] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.410 [2024-11-19 10:20:58.654432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.410 [2024-11-19 10:20:58.654454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.410 [2024-11-19 10:20:58.654518] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.410 [2024-11-19 10:20:58.654533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.410 [2024-11-19 10:20:58.654537] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654542] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.410 [2024-11-19 10:20:58.654554] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654559] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.410 [2024-11-19 10:20:58.654572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.410 [2024-11-19 10:20:58.654593] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.410 [2024-11-19 10:20:58.654651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.410 [2024-11-19 10:20:58.654659] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.410 [2024-11-19 10:20:58.654663] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654667] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.410 [2024-11-19 10:20:58.654679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654684] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654688] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.410 [2024-11-19 10:20:58.654696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.410 [2024-11-19 10:20:58.654716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.410 [2024-11-19 10:20:58.654793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.410 [2024-11-19 10:20:58.654800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.410 [2024-11-19 10:20:58.654805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654809] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.410 [2024-11-19 10:20:58.654838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654845] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.410 [2024-11-19 10:20:58.654849] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.410 [2024-11-19 10:20:58.654857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.410 [2024-11-19 10:20:58.654880] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 
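From the "Prepare to destruct SSD" entry onward, the long run of FABRIC PROPERTY GET capsules on cid 3 is the driver polling controller status while shutting the discovery controller down (RTD3E = 0 us, shutdown timeout = 10000 ms); the poll loop ends a little further on with "shutdown complete in 7 milliseconds". In application code this whole sequence sits behind a single detach call; a minimal sketch, assuming a previously connected ctrlr handle:

    /* Minimal sketch: spdk_nvme_detach() performs the controller shutdown handshake
     * (the repeated property reads seen above) before freeing the controller handle. */
    #include "spdk/nvme.h"

    static void
    teardown(struct spdk_nvme_ctrlr *ctrlr)
    {
            /* Synchronous variant; a non-blocking flow would use
             * spdk_nvme_detach_async() and poll with spdk_nvme_detach_poll_async(). */
            spdk_nvme_detach(ctrlr);
    }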
00:20:39.411 [2024-11-19 10:20:58.654944] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.654951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.654955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.654960] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.654991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655046] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655104] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.655116] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655170] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655234] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.655238] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655271] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655302] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655354] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:39.411 [2024-11-19 10:20:58.655365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655386] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655417] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655474] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655481] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.655485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655506] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655510] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655591] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.655603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655655] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.655724] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.655732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.655736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655740] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.655751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.655761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.655769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.655788] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.659849] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.659876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.659881] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.659886] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.659901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.659907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.659912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x686540) 00:20:39.411 [2024-11-19 10:20:58.659922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.411 [2024-11-19 10:20:58.659951] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6bf640, cid 3, qid 0 00:20:39.411 [2024-11-19 10:20:58.660022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.660029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.660033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.660038] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x6bf640) on tqpair=0x686540 00:20:39.411 [2024-11-19 10:20:58.660047] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:39.411 00:20:39.411 10:20:58 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:39.411 [2024-11-19 10:20:58.695290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
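The spdk_nvme_identify invocation above targets the NVM subsystem nqn.2016-06.io.spdk:cnode1, passing the connection details as a transport-ID string via -r. As a rough sketch of how such a string maps onto SPDK's public host API (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data()) -- assuming nothing beyond what the command line shows; the identify example's real implementation differs:

    /* Illustrative sketch only (not the identify example's actual code). */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    int
    main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            /* Same key:value format as the -r argument in the test command line. */
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* Drives the fabric CONNECT and the controller init state machine
             * logged above (read vs/cap, CC.EN, CSTS.RDY, identify, AER, ...). */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            /* Cached IDENTIFY CONTROLLER data, the source of the capability dump. */
            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("cntlid 0x%04x subnqn %s\n", cdata->cntlid,
                   (const char *)cdata->subnqn);

            spdk_nvme_detach(ctrlr);
            return 0;
    }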
00:20:39.411 [2024-11-19 10:20:58.695348] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93031 ] 00:20:39.411 [2024-11-19 10:20:58.835046] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:39.411 [2024-11-19 10:20:58.835122] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:39.411 [2024-11-19 10:20:58.835131] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:39.411 [2024-11-19 10:20:58.835145] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:39.411 [2024-11-19 10:20:58.835156] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:39.411 [2024-11-19 10:20:58.835317] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:39.411 [2024-11-19 10:20:58.835374] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x856540 0 00:20:39.411 [2024-11-19 10:20:58.842847] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:39.411 [2024-11-19 10:20:58.842874] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:39.411 [2024-11-19 10:20:58.842882] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:39.411 [2024-11-19 10:20:58.842886] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:39.411 [2024-11-19 10:20:58.842933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.842942] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.842947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.411 [2024-11-19 10:20:58.842963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:39.411 [2024-11-19 10:20:58.843016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.411 [2024-11-19 10:20:58.850840] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.411 [2024-11-19 10:20:58.850863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.411 [2024-11-19 10:20:58.850869] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.411 [2024-11-19 10:20:58.850875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.411 [2024-11-19 10:20:58.850892] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:39.411 [2024-11-19 10:20:58.850901] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:39.412 [2024-11-19 10:20:58.850908] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:39.412 [2024-11-19 10:20:58.850926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.850932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.850937] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.850947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.850993] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851078] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851094] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:39.412 [2024-11-19 10:20:58.851103] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:39.412 [2024-11-19 10:20:58.851122] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851127] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851131] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.851140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.851163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851233] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851238] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851244] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:39.412 [2024-11-19 10:20:58.851254] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.851281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.851300] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851361] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851366] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851370] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851377] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851393] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.851406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.851424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851507] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851517] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:39.412 [2024-11-19 10:20:58.851524] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851533] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851640] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:39.412 [2024-11-19 10:20:58.851649] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851659] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851664] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.851686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.851708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851788] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:39.412 [2024-11-19 10:20:58.851799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851804] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.851817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.851853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.851915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.851923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.851927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.851937] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:39.412 [2024-11-19 10:20:58.851943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:39.412 [2024-11-19 10:20:58.851953] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:39.412 [2024-11-19 10:20:58.851971] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:39.412 [2024-11-19 10:20:58.851983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.851992] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.412 [2024-11-19 10:20:58.852001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.412 [2024-11-19 10:20:58.852022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.412 [2024-11-19 10:20:58.852126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.412 [2024-11-19 10:20:58.852143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.412 [2024-11-19 10:20:58.852149] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.852154] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=4096, cccid=0 00:20:39.412 [2024-11-19 10:20:58.852159] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f220) on tqpair(0x856540): expected_datao=0, payload_size=4096 00:20:39.412 [2024-11-19 10:20:58.852170] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.852176] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 
10:20:58.852186] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.412 [2024-11-19 10:20:58.852193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.412 [2024-11-19 10:20:58.852197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.852202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.412 [2024-11-19 10:20:58.852212] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:39.412 [2024-11-19 10:20:58.852218] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:39.412 [2024-11-19 10:20:58.852223] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:39.412 [2024-11-19 10:20:58.852229] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:39.412 [2024-11-19 10:20:58.852235] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:39.412 [2024-11-19 10:20:58.852241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:39.412 [2024-11-19 10:20:58.852256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:39.412 [2024-11-19 10:20:58.852265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.852271] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.412 [2024-11-19 10:20:58.852275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.413 [2024-11-19 10:20:58.852308] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.413 [2024-11-19 10:20:58.852373] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.413 [2024-11-19 10:20:58.852380] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.413 [2024-11-19 10:20:58.852385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f220) on tqpair=0x856540 00:20:39.413 [2024-11-19 10:20:58.852398] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.413 [2024-11-19 10:20:58.852421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852426] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852430] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x856540) 
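[editor's note] The block above is the controller-enable handshake: after reading VS and CAP via Fabrics Property Get, the host sees "CC.EN = 0 && CSTS.RDY = 0", writes CC.EN = 1 with a Fabrics Property Set, then polls CSTS until RDY = 1 before issuing IDENTIFY and configuring AER. The sketch below restates that handshake in plain C; prop_get()/prop_set() and the fake register file are hypothetical stand-ins for the Fabrics Property Get/Set path, not SPDK code.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NVME_REG_CC   0x14   /* Controller Configuration */
    #define NVME_REG_CSTS 0x1c   /* Controller Status */

    /* Hypothetical register file so the sketch is self-contained. */
    static uint32_t fake_regs[0x40];

    static uint32_t prop_get(uint32_t off) { return fake_regs[off / 4]; }

    static void prop_set(uint32_t off, uint32_t val)
    {
        fake_regs[off / 4] = val;
        if (off == NVME_REG_CC && (val & 0x1)) {
            fake_regs[NVME_REG_CSTS / 4] |= 0x1;   /* pretend the target raises RDY */
        }
    }

    static bool enable_controller(void)
    {
        uint32_t cc   = prop_get(NVME_REG_CC);
        uint32_t csts = prop_get(NVME_REG_CSTS);

        if ((cc & 0x1) == 0 && (csts & 0x1) == 0) {
            prop_set(NVME_REG_CC, cc | 0x1);       /* "Setting CC.EN = 1" */
        }

        /* "wait for CSTS.RDY = 1" (bounded by the 15000 ms timeout in the log). */
        for (int i = 0; i < 15000; i++) {
            if (prop_get(NVME_REG_CSTS) & 0x1) {
                return true;
            }
            /* a real poller would sleep ~1 ms here */
        }
        return false;
    }

    int main(void)
    {
        printf("controller %s\n", enable_controller() ? "ready" : "timed out");
        return 0;
    }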
00:20:39.413 [2024-11-19 10:20:58.852437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.413 [2024-11-19 10:20:58.852444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852448] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852453] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.413 [2024-11-19 10:20:58.852466] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852470] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852475] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.413 [2024-11-19 10:20:58.852487] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852502] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852515] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852519] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.413 [2024-11-19 10:20:58.852550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f220, cid 0, qid 0 00:20:39.413 [2024-11-19 10:20:58.852557] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f380, cid 1, qid 0 00:20:39.413 [2024-11-19 10:20:58.852563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f4e0, cid 2, qid 0 00:20:39.413 [2024-11-19 10:20:58.852568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f640, cid 3, qid 0 00:20:39.413 [2024-11-19 10:20:58.852573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.413 [2024-11-19 10:20:58.852671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.413 [2024-11-19 10:20:58.852687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.413 [2024-11-19 10:20:58.852693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.413 [2024-11-19 10:20:58.852704] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:39.413 [2024-11-19 10:20:58.852711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852742] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.413 [2024-11-19 10:20:58.852782] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.413 [2024-11-19 10:20:58.852862] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.413 [2024-11-19 10:20:58.852872] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.413 [2024-11-19 10:20:58.852876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.413 [2024-11-19 10:20:58.852946] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852966] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.852977] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.852986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.852995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.413 [2024-11-19 10:20:58.853018] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.413 [2024-11-19 10:20:58.853091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.413 [2024-11-19 10:20:58.853098] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.413 [2024-11-19 10:20:58.853103] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853108] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=4096, cccid=4 00:20:39.413 [2024-11-19 10:20:58.853113] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f7a0) on tqpair(0x856540): expected_datao=0, payload_size=4096 00:20:39.413 [2024-11-19 10:20:58.853123] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853127] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853137] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:20:39.413 [2024-11-19 10:20:58.853143] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.413 [2024-11-19 10:20:58.853148] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853152] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.413 [2024-11-19 10:20:58.853170] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:39.413 [2024-11-19 10:20:58.853181] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.853193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.853202] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.853219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.413 [2024-11-19 10:20:58.853241] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.413 [2024-11-19 10:20:58.853322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.413 [2024-11-19 10:20:58.853330] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.413 [2024-11-19 10:20:58.853335] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853340] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=4096, cccid=4 00:20:39.413 [2024-11-19 10:20:58.853345] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f7a0) on tqpair(0x856540): expected_datao=0, payload_size=4096 00:20:39.413 [2024-11-19 10:20:58.853364] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853368] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853378] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.413 [2024-11-19 10:20:58.853384] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.413 [2024-11-19 10:20:58.853389] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853393] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.413 [2024-11-19 10:20:58.853410] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.853422] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:39.413 [2024-11-19 10:20:58.853431] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853441] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.413 [2024-11-19 10:20:58.853449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.413 [2024-11-19 10:20:58.853470] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.413 [2024-11-19 10:20:58.853539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.413 [2024-11-19 10:20:58.853562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.413 [2024-11-19 10:20:58.853567] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.413 [2024-11-19 10:20:58.853572] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=4096, cccid=4 00:20:39.413 [2024-11-19 10:20:58.853577] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f7a0) on tqpair(0x856540): expected_datao=0, payload_size=4096 00:20:39.414 [2024-11-19 10:20:58.853586] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853591] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.853607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.853611] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853616] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.853626] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853636] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853648] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853662] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853667] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:39.414 [2024-11-19 10:20:58.853673] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:39.414 [2024-11-19 10:20:58.853679] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:39.414 [2024-11-19 10:20:58.853713] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853725] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853729] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.853738] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.853747] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853752] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.853763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.414 [2024-11-19 10:20:58.853798] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.414 [2024-11-19 10:20:58.853807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f900, cid 5, qid 0 00:20:39.414 [2024-11-19 10:20:58.853922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.853932] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.853937] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853941] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.853949] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.853956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.853960] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853964] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f900) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.853976] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853981] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.853986] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.853994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f900, cid 5, qid 0 00:20:39.414 [2024-11-19 10:20:58.854074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.854082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.854086] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f900) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.854102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 
10:20:58.854138] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f900, cid 5, qid 0 00:20:39.414 [2024-11-19 10:20:58.854205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.854212] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.854216] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854221] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f900) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.854232] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f900, cid 5, qid 0 00:20:39.414 [2024-11-19 10:20:58.854328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.414 [2024-11-19 10:20:58.854336] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.414 [2024-11-19 10:20:58.854340] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f900) on tqpair=0x856540 00:20:39.414 [2024-11-19 10:20:58.854360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854366] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854370] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854386] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854391] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854441] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854445] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x856540) 00:20:39.414 [2024-11-19 10:20:58.854452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.414 [2024-11-19 10:20:58.854473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f900, cid 5, qid 0 00:20:39.414 [2024-11-19 10:20:58.854480] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f7a0, cid 4, qid 0 00:20:39.414 [2024-11-19 10:20:58.854486] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88fa60, cid 6, qid 0 00:20:39.414 [2024-11-19 10:20:58.854491] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88fbc0, cid 7, qid 0 00:20:39.414 [2024-11-19 10:20:58.854636] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.414 [2024-11-19 10:20:58.854650] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.414 [2024-11-19 10:20:58.854655] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854659] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=8192, cccid=5 00:20:39.414 [2024-11-19 10:20:58.854665] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f900) on tqpair(0x856540): expected_datao=0, payload_size=8192 00:20:39.414 [2024-11-19 10:20:58.854685] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854691] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.414 [2024-11-19 10:20:58.854704] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.414 [2024-11-19 10:20:58.854708] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854712] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=512, cccid=4 00:20:39.414 [2024-11-19 10:20:58.854717] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88f7a0) on tqpair(0x856540): expected_datao=0, payload_size=512 00:20:39.414 [2024-11-19 10:20:58.854725] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854730] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.414 [2024-11-19 10:20:58.854742] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.414 [2024-11-19 10:20:58.854746] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854751] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=512, cccid=6 00:20:39.414 [2024-11-19 10:20:58.854756] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88fa60) on tqpair(0x856540): expected_datao=0, payload_size=512 00:20:39.414 [2024-11-19 10:20:58.854764] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854768] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.414 [2024-11-19 10:20:58.854774] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:39.414 [2024-11-19 10:20:58.854781] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:39.415 [2024-11-19 10:20:58.854785] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.854789] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x856540): datao=0, datal=4096, cccid=7 00:20:39.415 [2024-11-19 10:20:58.854794] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x88fbc0) on tqpair(0x856540): expected_datao=0, payload_size=4096 00:20:39.415 [2024-11-19 10:20:58.854802] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.854807] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.854813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.415 [2024-11-19 10:20:58.858833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.415 [2024-11-19 10:20:58.858851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.858857] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f900) on tqpair=0x856540 00:20:39.415 [2024-11-19 10:20:58.858878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.415 [2024-11-19 10:20:58.858887] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.415 [2024-11-19 10:20:58.858891] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.858896] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f7a0) on tqpair=0x856540 00:20:39.415 [2024-11-19 10:20:58.858907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.415 ===================================================== 00:20:39.415 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:39.415 ===================================================== 00:20:39.415 Controller Capabilities/Features 00:20:39.415 ================================ 00:20:39.415 Vendor ID: 8086 00:20:39.415 Subsystem Vendor ID: 8086 00:20:39.415 Serial Number: SPDK00000000000001 00:20:39.415 Model Number: SPDK bdev Controller 00:20:39.415 Firmware Version: 24.01.1 00:20:39.415 Recommended Arb Burst: 6 00:20:39.415 IEEE OUI Identifier: e4 d2 5c 00:20:39.415 Multi-path I/O 00:20:39.415 May have multiple subsystem ports: Yes 00:20:39.415 May have multiple controllers: Yes 00:20:39.415 Associated with SR-IOV VF: No 00:20:39.415 Max Data Transfer Size: 131072 00:20:39.415 Max Number of Namespaces: 32 00:20:39.415 Max Number of I/O Queues: 127 00:20:39.415 NVMe Specification Version (VS): 1.3 00:20:39.415 NVMe Specification Version (Identify): 1.3 00:20:39.415 Maximum Queue Entries: 128 00:20:39.415 Contiguous Queues Required: Yes 00:20:39.415 Arbitration Mechanisms Supported 00:20:39.415 Weighted Round Robin: Not Supported 00:20:39.415 Vendor Specific: Not Supported 00:20:39.415 Reset Timeout: 15000 ms 00:20:39.415 Doorbell Stride: 4 bytes 00:20:39.415 NVM Subsystem Reset: Not Supported 00:20:39.415 Command Sets Supported 00:20:39.415 NVM Command Set: Supported 00:20:39.415 Boot Partition: Not Supported 00:20:39.415 Memory Page Size Minimum: 4096 bytes 00:20:39.415 Memory Page Size Maximum: 4096 bytes 00:20:39.415 Persistent Memory Region: Not Supported 00:20:39.415 Optional Asynchronous Events Supported 00:20:39.415 Namespace Attribute 
Notices: Supported 00:20:39.415 Firmware Activation Notices: Not Supported 00:20:39.415 ANA Change Notices: Not Supported 00:20:39.415 PLE Aggregate Log Change Notices: Not Supported 00:20:39.415 LBA Status Info Alert Notices: Not Supported 00:20:39.415 EGE Aggregate Log Change Notices: Not Supported 00:20:39.415 Normal NVM Subsystem Shutdown event: Not Supported 00:20:39.415 Zone Descriptor Change Notices: Not Supported 00:20:39.415 Discovery Log Change Notices: Not Supported 00:20:39.415 Controller Attributes 00:20:39.415 128-bit Host Identifier: Supported 00:20:39.415 Non-Operational Permissive Mode: Not Supported 00:20:39.415 NVM Sets: Not Supported 00:20:39.415 Read Recovery Levels: Not Supported 00:20:39.415 Endurance Groups: Not Supported 00:20:39.415 Predictable Latency Mode: Not Supported 00:20:39.415 Traffic Based Keep ALive: Not Supported 00:20:39.415 Namespace Granularity: Not Supported 00:20:39.415 SQ Associations: Not Supported 00:20:39.415 UUID List: Not Supported 00:20:39.415 Multi-Domain Subsystem: Not Supported 00:20:39.415 Fixed Capacity Management: Not Supported 00:20:39.415 Variable Capacity Management: Not Supported 00:20:39.415 Delete Endurance Group: Not Supported 00:20:39.415 Delete NVM Set: Not Supported 00:20:39.415 Extended LBA Formats Supported: Not Supported 00:20:39.415 Flexible Data Placement Supported: Not Supported 00:20:39.415 00:20:39.415 Controller Memory Buffer Support 00:20:39.415 ================================ 00:20:39.415 Supported: No 00:20:39.415 00:20:39.415 Persistent Memory Region Support 00:20:39.415 ================================ 00:20:39.415 Supported: No 00:20:39.415 00:20:39.415 Admin Command Set Attributes 00:20:39.415 ============================ 00:20:39.415 Security Send/Receive: Not Supported 00:20:39.415 Format NVM: Not Supported 00:20:39.415 Firmware Activate/Download: Not Supported 00:20:39.415 Namespace Management: Not Supported 00:20:39.415 Device Self-Test: Not Supported 00:20:39.415 Directives: Not Supported 00:20:39.415 NVMe-MI: Not Supported 00:20:39.415 Virtualization Management: Not Supported 00:20:39.415 Doorbell Buffer Config: Not Supported 00:20:39.415 Get LBA Status Capability: Not Supported 00:20:39.415 Command & Feature Lockdown Capability: Not Supported 00:20:39.415 Abort Command Limit: 4 00:20:39.415 Async Event Request Limit: 4 00:20:39.415 Number of Firmware Slots: N/A 00:20:39.415 Firmware Slot 1 Read-Only: N/A 00:20:39.415 Firmware Activation Without Reset: [2024-11-19 10:20:58.858914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.415 [2024-11-19 10:20:58.858919] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.858923] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88fa60) on tqpair=0x856540 00:20:39.415 [2024-11-19 10:20:58.858931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.415 [2024-11-19 10:20:58.858938] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.415 [2024-11-19 10:20:58.858942] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.415 [2024-11-19 10:20:58.858947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88fbc0) on tqpair=0x856540 00:20:39.415 N/A 00:20:39.415 Multiple Update Detection Support: N/A 00:20:39.415 Firmware Update Granularity: No Information Provided 00:20:39.415 Per-Namespace SMART Log: No 00:20:39.415 Asymmetric Namespace Access Log Page: Not Supported 00:20:39.415 
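[editor's note] A side note on the "Max Data Transfer Size: 131072" reported above (and the matching "MDTS max_xfer_size 131072" debug line during init): MDTS in IDENTIFY CONTROLLER is a power-of-two multiplier on the minimum memory page size from CAP.MPSMIN, so a 4 KiB minimum page with MDTS = 5 gives 4096 << 5 = 131072 bytes. The small C sketch below shows the conversion with illustrative input values, not values read from this target.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t mdts_to_bytes(uint8_t mdts, uint8_t cap_mpsmin)
    {
        uint64_t min_page = 1ull << (12 + cap_mpsmin);    /* CAP.MPSMIN is an exponent offset from 4 KiB */
        return mdts == 0 ? UINT64_MAX : min_page << mdts; /* MDTS == 0 means no transfer size limit */
    }

    int main(void)
    {
        /* MDTS = 5, MPSMIN = 0 (4 KiB pages): prints 131072 */
        printf("%llu\n", (unsigned long long)mdts_to_bytes(5, 0));
        return 0;
    }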
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:39.415 Command Effects Log Page: Supported 00:20:39.415 Get Log Page Extended Data: Supported 00:20:39.415 Telemetry Log Pages: Not Supported 00:20:39.415 Persistent Event Log Pages: Not Supported 00:20:39.415 Supported Log Pages Log Page: May Support 00:20:39.415 Commands Supported & Effects Log Page: Not Supported 00:20:39.415 Feature Identifiers & Effects Log Page:May Support 00:20:39.415 NVMe-MI Commands & Effects Log Page: May Support 00:20:39.415 Data Area 4 for Telemetry Log: Not Supported 00:20:39.415 Error Log Page Entries Supported: 128 00:20:39.415 Keep Alive: Supported 00:20:39.415 Keep Alive Granularity: 10000 ms 00:20:39.415 00:20:39.415 NVM Command Set Attributes 00:20:39.415 ========================== 00:20:39.415 Submission Queue Entry Size 00:20:39.415 Max: 64 00:20:39.415 Min: 64 00:20:39.416 Completion Queue Entry Size 00:20:39.416 Max: 16 00:20:39.416 Min: 16 00:20:39.416 Number of Namespaces: 32 00:20:39.416 Compare Command: Supported 00:20:39.416 Write Uncorrectable Command: Not Supported 00:20:39.416 Dataset Management Command: Supported 00:20:39.416 Write Zeroes Command: Supported 00:20:39.416 Set Features Save Field: Not Supported 00:20:39.416 Reservations: Supported 00:20:39.416 Timestamp: Not Supported 00:20:39.416 Copy: Supported 00:20:39.416 Volatile Write Cache: Present 00:20:39.416 Atomic Write Unit (Normal): 1 00:20:39.416 Atomic Write Unit (PFail): 1 00:20:39.416 Atomic Compare & Write Unit: 1 00:20:39.416 Fused Compare & Write: Supported 00:20:39.416 Scatter-Gather List 00:20:39.416 SGL Command Set: Supported 00:20:39.416 SGL Keyed: Supported 00:20:39.416 SGL Bit Bucket Descriptor: Not Supported 00:20:39.416 SGL Metadata Pointer: Not Supported 00:20:39.416 Oversized SGL: Not Supported 00:20:39.416 SGL Metadata Address: Not Supported 00:20:39.416 SGL Offset: Supported 00:20:39.416 Transport SGL Data Block: Not Supported 00:20:39.416 Replay Protected Memory Block: Not Supported 00:20:39.416 00:20:39.416 Firmware Slot Information 00:20:39.416 ========================= 00:20:39.416 Active slot: 1 00:20:39.416 Slot 1 Firmware Revision: 24.01.1 00:20:39.416 00:20:39.416 00:20:39.416 Commands Supported and Effects 00:20:39.416 ============================== 00:20:39.416 Admin Commands 00:20:39.416 -------------- 00:20:39.416 Get Log Page (02h): Supported 00:20:39.416 Identify (06h): Supported 00:20:39.416 Abort (08h): Supported 00:20:39.416 Set Features (09h): Supported 00:20:39.416 Get Features (0Ah): Supported 00:20:39.416 Asynchronous Event Request (0Ch): Supported 00:20:39.416 Keep Alive (18h): Supported 00:20:39.416 I/O Commands 00:20:39.416 ------------ 00:20:39.416 Flush (00h): Supported LBA-Change 00:20:39.416 Write (01h): Supported LBA-Change 00:20:39.416 Read (02h): Supported 00:20:39.416 Compare (05h): Supported 00:20:39.416 Write Zeroes (08h): Supported LBA-Change 00:20:39.416 Dataset Management (09h): Supported LBA-Change 00:20:39.416 Copy (19h): Supported LBA-Change 00:20:39.416 Unknown (79h): Supported LBA-Change 00:20:39.416 Unknown (7Ah): Supported 00:20:39.416 00:20:39.416 Error Log 00:20:39.416 ========= 00:20:39.416 00:20:39.416 Arbitration 00:20:39.416 =========== 00:20:39.416 Arbitration Burst: 1 00:20:39.416 00:20:39.416 Power Management 00:20:39.416 ================ 00:20:39.416 Number of Power States: 1 00:20:39.416 Current Power State: Power State #0 00:20:39.416 Power State #0: 00:20:39.416 Max Power: 0.00 W 00:20:39.416 Non-Operational State: Operational 00:20:39.416 Entry Latency: Not 
Reported 00:20:39.416 Exit Latency: Not Reported 00:20:39.416 Relative Read Throughput: 0 00:20:39.416 Relative Read Latency: 0 00:20:39.416 Relative Write Throughput: 0 00:20:39.416 Relative Write Latency: 0 00:20:39.416 Idle Power: Not Reported 00:20:39.416 Active Power: Not Reported 00:20:39.416 Non-Operational Permissive Mode: Not Supported 00:20:39.416 00:20:39.416 Health Information 00:20:39.416 ================== 00:20:39.416 Critical Warnings: 00:20:39.416 Available Spare Space: OK 00:20:39.416 Temperature: OK 00:20:39.416 Device Reliability: OK 00:20:39.416 Read Only: No 00:20:39.416 Volatile Memory Backup: OK 00:20:39.416 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:39.416 Temperature Threshold: [2024-11-19 10:20:58.859108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x856540) 00:20:39.416 [2024-11-19 10:20:58.859133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.416 [2024-11-19 10:20:58.859165] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88fbc0, cid 7, qid 0 00:20:39.416 [2024-11-19 10:20:58.859243] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.416 [2024-11-19 10:20:58.859251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.416 [2024-11-19 10:20:58.859256] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88fbc0) on tqpair=0x856540 00:20:39.416 [2024-11-19 10:20:58.859300] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:39.416 [2024-11-19 10:20:58.859316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.416 [2024-11-19 10:20:58.859324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.416 [2024-11-19 10:20:58.859331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.416 [2024-11-19 10:20:58.859338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.416 [2024-11-19 10:20:58.859348] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859353] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859358] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x856540) 00:20:39.416 [2024-11-19 10:20:58.859367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.416 [2024-11-19 10:20:58.859392] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f640, cid 3, qid 0 00:20:39.416 [2024-11-19 10:20:58.859453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.416 [2024-11-19 10:20:58.859461] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.416 
[2024-11-19 10:20:58.859465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f640) on tqpair=0x856540 00:20:39.416 [2024-11-19 10:20:58.859478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859483] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x856540) 00:20:39.416 [2024-11-19 10:20:58.859495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.416 [2024-11-19 10:20:58.859518] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f640, cid 3, qid 0 00:20:39.416 [2024-11-19 10:20:58.859596] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.416 [2024-11-19 10:20:58.859603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.416 [2024-11-19 10:20:58.859607] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859612] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f640) on tqpair=0x856540 00:20:39.416 [2024-11-19 10:20:58.859618] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:39.416 [2024-11-19 10:20:58.859623] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:39.416 [2024-11-19 10:20:58.859634] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859644] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x856540) 00:20:39.416 [2024-11-19 10:20:58.859652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.416 [2024-11-19 10:20:58.859670] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f640, cid 3, qid 0 00:20:39.416 [2024-11-19 10:20:58.859728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.416 [2024-11-19 10:20:58.859736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.416 [2024-11-19 10:20:58.859740] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f640) on tqpair=0x856540 00:20:39.416 [2024-11-19 10:20:58.859757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:39.416 [2024-11-19 10:20:58.859766] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x856540) 00:20:39.417 [2024-11-19 10:20:58.859774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.417 [2024-11-19 10:20:58.859792] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x88f640, cid 3, qid 0 00:20:39.417 [2024-11-19 10:20:58.859861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.417 [2024-11-19 
10:20:58.859871] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.417 [... the same nvme_tcp shutdown-poll debug sequence (pdu type = 5, nvme_tcp_capsule_resp_hdr_handle enter, complete tcp_req(0x88f640) on tqpair=0x856540, nvme_tcp_build_contig_request, capsule_cmd cid=3 on tqpair(0x856540), FABRIC PROPERTY GET qid:0 cid:3) repeats for the remaining poll iterations between 10:20:58.859875 and 10:20:58.866937 ...] 00:20:39.419 [2024-11-19 10:20:58.867007] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:39.419 [2024-11-19 10:20:58.867020] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:39.419 [2024-11-19 10:20:58.867026] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:39.419 [2024-11-19 10:20:58.867034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x88f640) on tqpair=0x856540 00:20:39.419 [2024-11-19 10:20:58.867047] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in
7 milliseconds 00:20:39.419 0 Kelvin (-273 Celsius) 00:20:39.419 Available Spare: 0% 00:20:39.419 Available Spare Threshold: 0% 00:20:39.419 Life Percentage Used: 0% 00:20:39.419 Data Units Read: 0 00:20:39.419 Data Units Written: 0 00:20:39.419 Host Read Commands: 0 00:20:39.419 Host Write Commands: 0 00:20:39.419 Controller Busy Time: 0 minutes 00:20:39.419 Power Cycles: 0 00:20:39.419 Power On Hours: 0 hours 00:20:39.419 Unsafe Shutdowns: 0 00:20:39.419 Unrecoverable Media Errors: 0 00:20:39.419 Lifetime Error Log Entries: 0 00:20:39.419 Warning Temperature Time: 0 minutes 00:20:39.419 Critical Temperature Time: 0 minutes 00:20:39.419 00:20:39.419 Number of Queues 00:20:39.419 ================ 00:20:39.419 Number of I/O Submission Queues: 127 00:20:39.419 Number of I/O Completion Queues: 127 00:20:39.419 00:20:39.419 Active Namespaces 00:20:39.419 ================= 00:20:39.419 Namespace ID:1 00:20:39.419 Error Recovery Timeout: Unlimited 00:20:39.419 Command Set Identifier: NVM (00h) 00:20:39.419 Deallocate: Supported 00:20:39.419 Deallocated/Unwritten Error: Not Supported 00:20:39.419 Deallocated Read Value: Unknown 00:20:39.419 Deallocate in Write Zeroes: Not Supported 00:20:39.419 Deallocated Guard Field: 0xFFFF 00:20:39.419 Flush: Supported 00:20:39.419 Reservation: Supported 00:20:39.419 Namespace Sharing Capabilities: Multiple Controllers 00:20:39.419 Size (in LBAs): 131072 (0GiB) 00:20:39.419 Capacity (in LBAs): 131072 (0GiB) 00:20:39.419 Utilization (in LBAs): 131072 (0GiB) 00:20:39.419 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:39.419 EUI64: ABCDEF0123456789 00:20:39.419 UUID: e5008960-e1d3-4505-afc6-e4096459b47a 00:20:39.419 Thin Provisioning: Not Supported 00:20:39.419 Per-NS Atomic Units: Yes 00:20:39.419 Atomic Boundary Size (Normal): 0 00:20:39.419 Atomic Boundary Size (PFail): 0 00:20:39.419 Atomic Boundary Offset: 0 00:20:39.419 Maximum Single Source Range Length: 65535 00:20:39.419 Maximum Copy Length: 65535 00:20:39.419 Maximum Source Range Count: 1 00:20:39.419 NGUID/EUI64 Never Reused: No 00:20:39.419 Namespace Write Protected: No 00:20:39.419 Number of LBA Formats: 1 00:20:39.419 Current LBA Format: LBA Format #00 00:20:39.419 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:39.419 00:20:39.419 10:20:58 -- host/identify.sh@51 -- # sync 00:20:39.419 10:20:58 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.419 10:20:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.419 10:20:58 -- common/autotest_common.sh@10 -- # set +x 00:20:39.419 10:20:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.419 10:20:58 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:39.419 10:20:58 -- host/identify.sh@56 -- # nvmftestfini 00:20:39.419 10:20:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:39.419 10:20:58 -- nvmf/common.sh@116 -- # sync 00:20:39.419 10:20:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:39.419 10:20:58 -- nvmf/common.sh@119 -- # set +e 00:20:39.419 10:20:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:39.419 10:20:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:39.419 rmmod nvme_tcp 00:20:39.678 rmmod nvme_fabrics 00:20:39.678 rmmod nvme_keyring 00:20:39.678 10:20:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:39.678 10:20:58 -- nvmf/common.sh@123 -- # set -e 00:20:39.678 10:20:58 -- nvmf/common.sh@124 -- # return 0 00:20:39.678 10:20:58 -- nvmf/common.sh@477 -- # '[' -n 92984 ']' 00:20:39.678 10:20:58 -- nvmf/common.sh@478 -- # 
killprocess 92984 00:20:39.678 10:20:58 -- common/autotest_common.sh@936 -- # '[' -z 92984 ']' 00:20:39.678 10:20:58 -- common/autotest_common.sh@940 -- # kill -0 92984 00:20:39.678 10:20:58 -- common/autotest_common.sh@941 -- # uname 00:20:39.678 10:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.678 10:20:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92984 00:20:39.678 10:20:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:39.678 10:20:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:39.678 10:20:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92984' 00:20:39.678 killing process with pid 92984 00:20:39.678 10:20:59 -- common/autotest_common.sh@955 -- # kill 92984 00:20:39.678 [2024-11-19 10:20:59.028206] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:39.678 10:20:59 -- common/autotest_common.sh@960 -- # wait 92984 00:20:39.678 10:20:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:39.678 10:20:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:39.678 10:20:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:39.678 10:20:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.678 10:20:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:39.678 10:20:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.678 10:20:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.678 10:20:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.678 10:20:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:39.678 00:20:39.678 real 0m1.847s 00:20:39.678 user 0m4.212s 00:20:39.678 sys 0m0.549s 00:20:39.678 10:20:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:39.678 10:20:59 -- common/autotest_common.sh@10 -- # set +x 00:20:39.678 ************************************ 00:20:39.678 END TEST nvmf_identify 00:20:39.678 ************************************ 00:20:39.935 10:20:59 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.935 10:20:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:39.935 10:20:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:39.935 10:20:59 -- common/autotest_common.sh@10 -- # set +x 00:20:39.935 ************************************ 00:20:39.935 START TEST nvmf_perf 00:20:39.935 ************************************ 00:20:39.935 10:20:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:39.935 * Looking for test storage... 
00:20:39.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:39.935 10:20:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:39.936 10:20:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:39.936 10:20:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:39.936 10:20:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:39.936 10:20:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:39.936 10:20:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:39.936 10:20:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:39.936 10:20:59 -- scripts/common.sh@335 -- # IFS=.-: 00:20:39.936 10:20:59 -- scripts/common.sh@335 -- # read -ra ver1 00:20:39.936 10:20:59 -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.936 10:20:59 -- scripts/common.sh@336 -- # read -ra ver2 00:20:39.936 10:20:59 -- scripts/common.sh@337 -- # local 'op=<' 00:20:39.936 10:20:59 -- scripts/common.sh@339 -- # ver1_l=2 00:20:39.936 10:20:59 -- scripts/common.sh@340 -- # ver2_l=1 00:20:39.936 10:20:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:39.936 10:20:59 -- scripts/common.sh@343 -- # case "$op" in 00:20:39.936 10:20:59 -- scripts/common.sh@344 -- # : 1 00:20:39.936 10:20:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:39.936 10:20:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.936 10:20:59 -- scripts/common.sh@364 -- # decimal 1 00:20:39.936 10:20:59 -- scripts/common.sh@352 -- # local d=1 00:20:39.936 10:20:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.936 10:20:59 -- scripts/common.sh@354 -- # echo 1 00:20:39.936 10:20:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:39.936 10:20:59 -- scripts/common.sh@365 -- # decimal 2 00:20:39.936 10:20:59 -- scripts/common.sh@352 -- # local d=2 00:20:39.936 10:20:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.936 10:20:59 -- scripts/common.sh@354 -- # echo 2 00:20:39.936 10:20:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:39.936 10:20:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:39.936 10:20:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:39.936 10:20:59 -- scripts/common.sh@367 -- # return 0 00:20:39.936 10:20:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.936 10:20:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:39.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.936 --rc genhtml_branch_coverage=1 00:20:39.936 --rc genhtml_function_coverage=1 00:20:39.936 --rc genhtml_legend=1 00:20:39.936 --rc geninfo_all_blocks=1 00:20:39.936 --rc geninfo_unexecuted_blocks=1 00:20:39.936 00:20:39.936 ' 00:20:39.936 10:20:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:39.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.936 --rc genhtml_branch_coverage=1 00:20:39.936 --rc genhtml_function_coverage=1 00:20:39.936 --rc genhtml_legend=1 00:20:39.936 --rc geninfo_all_blocks=1 00:20:39.936 --rc geninfo_unexecuted_blocks=1 00:20:39.936 00:20:39.936 ' 00:20:39.936 10:20:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:39.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.936 --rc genhtml_branch_coverage=1 00:20:39.936 --rc genhtml_function_coverage=1 00:20:39.936 --rc genhtml_legend=1 00:20:39.936 --rc geninfo_all_blocks=1 00:20:39.936 --rc geninfo_unexecuted_blocks=1 00:20:39.936 00:20:39.936 ' 00:20:39.936 
10:20:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:39.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.936 --rc genhtml_branch_coverage=1 00:20:39.936 --rc genhtml_function_coverage=1 00:20:39.936 --rc genhtml_legend=1 00:20:39.936 --rc geninfo_all_blocks=1 00:20:39.936 --rc geninfo_unexecuted_blocks=1 00:20:39.936 00:20:39.936 ' 00:20:39.936 10:20:59 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:39.936 10:20:59 -- nvmf/common.sh@7 -- # uname -s 00:20:39.936 10:20:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.936 10:20:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.936 10:20:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.936 10:20:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.936 10:20:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.936 10:20:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.936 10:20:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.936 10:20:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.936 10:20:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.936 10:20:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:20:39.936 10:20:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:20:39.936 10:20:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.936 10:20:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.936 10:20:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:39.936 10:20:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:39.936 10:20:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.936 10:20:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.936 10:20:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.936 10:20:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.936 10:20:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.936 10:20:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.936 10:20:59 -- paths/export.sh@5 -- # export PATH 00:20:39.936 10:20:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.936 10:20:59 -- nvmf/common.sh@46 -- # : 0 00:20:39.936 10:20:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:39.936 10:20:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:39.936 10:20:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:39.936 10:20:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.936 10:20:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.936 10:20:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:39.936 10:20:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:39.936 10:20:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:39.936 10:20:59 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:39.936 10:20:59 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:39.936 10:20:59 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:39.936 10:20:59 -- host/perf.sh@17 -- # nvmftestinit 00:20:39.936 10:20:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:39.936 10:20:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.936 10:20:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:39.936 10:20:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:39.936 10:20:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:39.936 10:20:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.936 10:20:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.936 10:20:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.936 10:20:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:39.936 10:20:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:39.936 10:20:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.936 10:20:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.936 10:20:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:39.936 10:20:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:39.936 10:20:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:39.936 10:20:59 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:39.936 10:20:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:39.936 10:20:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.936 10:20:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:39.936 10:20:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:39.936 10:20:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:39.936 10:20:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:39.936 10:20:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:40.195 10:20:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:40.195 Cannot find device "nvmf_tgt_br" 00:20:40.195 10:20:59 -- nvmf/common.sh@154 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.195 Cannot find device "nvmf_tgt_br2" 00:20:40.195 10:20:59 -- nvmf/common.sh@155 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:40.195 10:20:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:40.195 Cannot find device "nvmf_tgt_br" 00:20:40.195 10:20:59 -- nvmf/common.sh@157 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:40.195 Cannot find device "nvmf_tgt_br2" 00:20:40.195 10:20:59 -- nvmf/common.sh@158 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:40.195 10:20:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:40.195 10:20:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.195 10:20:59 -- nvmf/common.sh@161 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.195 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.195 10:20:59 -- nvmf/common.sh@162 -- # true 00:20:40.195 10:20:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.195 10:20:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.195 10:20:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.195 10:20:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.195 10:20:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.195 10:20:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.195 10:20:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.195 10:20:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:40.195 10:20:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:40.195 10:20:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:40.195 10:20:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:40.454 10:20:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:40.454 10:20:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:40.454 10:20:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.454 10:20:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:40.454 10:20:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.454 10:20:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:40.454 10:20:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:40.454 10:20:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.454 10:20:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.454 10:20:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.454 10:20:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.454 10:20:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.454 10:20:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:40.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:40.454 00:20:40.454 --- 10.0.0.2 ping statistics --- 00:20:40.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.454 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:40.454 10:20:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:40.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:40.454 00:20:40.454 --- 10.0.0.3 ping statistics --- 00:20:40.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.454 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:40.454 10:20:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:40.454 00:20:40.454 --- 10.0.0.1 ping statistics --- 00:20:40.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.454 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:40.454 10:20:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.454 10:20:59 -- nvmf/common.sh@421 -- # return 0 00:20:40.454 10:20:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.454 10:20:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.454 10:20:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:40.454 10:20:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:40.454 10:20:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.454 10:20:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:40.454 10:20:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:40.454 10:20:59 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:40.454 10:20:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.454 10:20:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.454 10:20:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.454 10:20:59 -- nvmf/common.sh@469 -- # nvmfpid=93205 00:20:40.454 10:20:59 -- nvmf/common.sh@470 -- # waitforlisten 93205 00:20:40.454 10:20:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.454 10:20:59 -- common/autotest_common.sh@829 -- # '[' -z 93205 ']' 00:20:40.454 10:20:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.454 10:20:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:40.454 10:20:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.454 10:20:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.454 10:20:59 -- common/autotest_common.sh@10 -- # set +x 00:20:40.454 [2024-11-19 10:20:59.910842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:20:40.455 [2024-11-19 10:20:59.910932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.714 [2024-11-19 10:21:00.043191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.714 [2024-11-19 10:21:00.078784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.714 [2024-11-19 10:21:00.078934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.714 [2024-11-19 10:21:00.078947] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.714 [2024-11-19 10:21:00.078956] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.714 [2024-11-19 10:21:00.079724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.714 [2024-11-19 10:21:00.079802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.714 [2024-11-19 10:21:00.079875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.714 [2024-11-19 10:21:00.079864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.714 10:21:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.714 10:21:00 -- common/autotest_common.sh@862 -- # return 0 00:20:40.714 10:21:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.714 10:21:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.714 10:21:00 -- common/autotest_common.sh@10 -- # set +x 00:20:40.714 10:21:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.714 10:21:00 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:40.714 10:21:00 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:41.280 10:21:00 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:41.280 10:21:00 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:41.538 10:21:00 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:41.538 10:21:00 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:41.797 10:21:01 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:41.797 10:21:01 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:41.797 10:21:01 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:41.797 10:21:01 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:41.797 10:21:01 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.055 [2024-11-19 10:21:01.538091] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.055 10:21:01 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.313 10:21:01 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:20:42.313 10:21:01 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:42.572 10:21:02 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:42.572 10:21:02 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:43.137 10:21:02 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.395 [2024-11-19 10:21:02.743744] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.395 10:21:02 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.654 10:21:03 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:43.654 10:21:03 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:43.654 10:21:03 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:43.654 10:21:03 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:44.638 Initializing NVMe Controllers 00:20:44.638 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:44.638 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:44.638 Initialization complete. Launching workers. 00:20:44.639 ======================================================== 00:20:44.639 Latency(us) 00:20:44.639 Device Information : IOPS MiB/s Average min max 00:20:44.639 PCIE (0000:00:06.0) NSID 1 from core 0: 25824.00 100.88 1238.57 292.32 5303.18 00:20:44.639 ======================================================== 00:20:44.639 Total : 25824.00 100.88 1238.57 292.32 5303.18 00:20:44.639 00:20:44.639 10:21:04 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.011 Initializing NVMe Controllers 00:20:46.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:46.011 Initialization complete. Launching workers. 
00:20:46.011 ======================================================== 00:20:46.011 Latency(us) 00:20:46.011 Device Information : IOPS MiB/s Average min max 00:20:46.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3593.59 14.04 277.93 112.63 6175.15 00:20:46.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.63 6031.65 12024.48 00:20:46.011 ======================================================== 00:20:46.011 Total : 3717.09 14.52 539.84 112.63 12024.48 00:20:46.011 00:20:46.011 10:21:05 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.385 Initializing NVMe Controllers 00:20:47.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.385 Initialization complete. Launching workers. 00:20:47.385 ======================================================== 00:20:47.385 Latency(us) 00:20:47.385 Device Information : IOPS MiB/s Average min max 00:20:47.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8738.59 34.14 3662.18 600.65 8065.72 00:20:47.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2667.05 10.42 12075.09 5918.43 20243.52 00:20:47.385 ======================================================== 00:20:47.385 Total : 11405.64 44.55 5629.42 600.65 20243.52 00:20:47.385 00:20:47.385 10:21:06 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:47.385 10:21:06 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:49.914 [2024-11-19 10:21:09.329875] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16575d0 is same with the state(5) to be set 00:20:49.914 Initializing NVMe Controllers 00:20:49.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.914 Controller IO queue size 128, less than required. 00:20:49.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.914 Controller IO queue size 128, less than required. 00:20:49.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:49.914 Initialization complete. Launching workers. 
00:20:49.914 ======================================================== 00:20:49.914 Latency(us) 00:20:49.914 Device Information : IOPS MiB/s Average min max 00:20:49.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1296.97 324.24 103812.53 64691.26 203597.52 00:20:49.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.77 143.44 229657.97 74775.15 402489.66 00:20:49.914 ======================================================== 00:20:49.914 Total : 1870.74 467.69 142410.06 64691.26 402489.66 00:20:49.914 00:20:49.914 10:21:09 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:50.172 No valid NVMe controllers or AIO or URING devices found 00:20:50.172 Initializing NVMe Controllers 00:20:50.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.172 Controller IO queue size 128, less than required. 00:20:50.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.172 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:50.172 Controller IO queue size 128, less than required. 00:20:50.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.172 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:50.172 WARNING: Some requested NVMe devices were skipped 00:20:50.172 10:21:09 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:52.714 Initializing NVMe Controllers 00:20:52.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.714 Controller IO queue size 128, less than required. 00:20:52.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:52.714 Controller IO queue size 128, less than required. 00:20:52.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:52.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:52.714 Initialization complete. Launching workers. 
00:20:52.714 00:20:52.714 ==================== 00:20:52.714 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:52.714 TCP transport: 00:20:52.714 polls: 7176 00:20:52.714 idle_polls: 4007 00:20:52.714 sock_completions: 3169 00:20:52.714 nvme_completions: 5875 00:20:52.714 submitted_requests: 8943 00:20:52.714 queued_requests: 1 00:20:52.714 00:20:52.714 ==================== 00:20:52.714 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:52.714 TCP transport: 00:20:52.714 polls: 9797 00:20:52.714 idle_polls: 6808 00:20:52.714 sock_completions: 2989 00:20:52.714 nvme_completions: 5610 00:20:52.714 submitted_requests: 8476 00:20:52.714 queued_requests: 1 00:20:52.714 ======================================================== 00:20:52.714 Latency(us) 00:20:52.714 Device Information : IOPS MiB/s Average min max 00:20:52.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1531.30 382.83 84916.31 58364.26 121519.10 00:20:52.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1464.83 366.21 88875.42 33793.59 160555.37 00:20:52.714 ======================================================== 00:20:52.714 Total : 2996.13 749.03 86851.95 33793.59 160555.37 00:20:52.714 00:20:52.714 10:21:12 -- host/perf.sh@66 -- # sync 00:20:52.714 10:21:12 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.973 10:21:12 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:52.973 10:21:12 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:52.973 10:21:12 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:53.540 10:21:12 -- host/perf.sh@72 -- # ls_guid=6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4 00:20:53.540 10:21:12 -- host/perf.sh@73 -- # get_lvs_free_mb 6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4 00:20:53.540 10:21:12 -- common/autotest_common.sh@1353 -- # local lvs_uuid=6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4 00:20:53.540 10:21:12 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:53.540 10:21:12 -- common/autotest_common.sh@1355 -- # local fc 00:20:53.540 10:21:12 -- common/autotest_common.sh@1356 -- # local cs 00:20:53.540 10:21:12 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:53.540 10:21:13 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:53.540 { 00:20:53.540 "base_bdev": "Nvme0n1", 00:20:53.540 "block_size": 4096, 00:20:53.540 "cluster_size": 4194304, 00:20:53.540 "free_clusters": 1278, 00:20:53.540 "name": "lvs_0", 00:20:53.540 "total_data_clusters": 1278, 00:20:53.540 "uuid": "6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4" 00:20:53.540 } 00:20:53.540 ]' 00:20:53.540 10:21:13 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4") .free_clusters' 00:20:53.798 10:21:13 -- common/autotest_common.sh@1358 -- # fc=1278 00:20:53.798 10:21:13 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4") .cluster_size' 00:20:53.798 10:21:13 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:53.798 10:21:13 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:20:53.798 5112 00:20:53.798 10:21:13 -- common/autotest_common.sh@1363 -- # echo 5112 00:20:53.798 10:21:13 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:53.798 10:21:13 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u 6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4 lbd_0 5112 00:20:54.057 10:21:13 -- host/perf.sh@80 -- # lb_guid=723538d2-698a-4a88-b1ab-935f6b09d6a5 00:20:54.057 10:21:13 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 723538d2-698a-4a88-b1ab-935f6b09d6a5 lvs_n_0 00:20:54.316 10:21:13 -- host/perf.sh@83 -- # ls_nested_guid=0074ef8a-8f2d-4512-8a8b-a94c5672c43c 00:20:54.316 10:21:13 -- host/perf.sh@84 -- # get_lvs_free_mb 0074ef8a-8f2d-4512-8a8b-a94c5672c43c 00:20:54.316 10:21:13 -- common/autotest_common.sh@1353 -- # local lvs_uuid=0074ef8a-8f2d-4512-8a8b-a94c5672c43c 00:20:54.316 10:21:13 -- common/autotest_common.sh@1354 -- # local lvs_info 00:20:54.316 10:21:13 -- common/autotest_common.sh@1355 -- # local fc 00:20:54.316 10:21:13 -- common/autotest_common.sh@1356 -- # local cs 00:20:54.316 10:21:13 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:54.574 10:21:14 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:20:54.574 { 00:20:54.574 "base_bdev": "Nvme0n1", 00:20:54.574 "block_size": 4096, 00:20:54.574 "cluster_size": 4194304, 00:20:54.574 "free_clusters": 0, 00:20:54.574 "name": "lvs_0", 00:20:54.574 "total_data_clusters": 1278, 00:20:54.574 "uuid": "6293a46c-f2d5-4d52-b4c6-ad8a01c5caf4" 00:20:54.574 }, 00:20:54.574 { 00:20:54.574 "base_bdev": "723538d2-698a-4a88-b1ab-935f6b09d6a5", 00:20:54.574 "block_size": 4096, 00:20:54.574 "cluster_size": 4194304, 00:20:54.574 "free_clusters": 1276, 00:20:54.574 "name": "lvs_n_0", 00:20:54.574 "total_data_clusters": 1276, 00:20:54.574 "uuid": "0074ef8a-8f2d-4512-8a8b-a94c5672c43c" 00:20:54.574 } 00:20:54.574 ]' 00:20:54.574 10:21:14 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="0074ef8a-8f2d-4512-8a8b-a94c5672c43c") .free_clusters' 00:20:54.832 10:21:14 -- common/autotest_common.sh@1358 -- # fc=1276 00:20:54.832 10:21:14 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="0074ef8a-8f2d-4512-8a8b-a94c5672c43c") .cluster_size' 00:20:54.832 10:21:14 -- common/autotest_common.sh@1359 -- # cs=4194304 00:20:54.832 10:21:14 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:20:54.832 5104 00:20:54.832 10:21:14 -- common/autotest_common.sh@1363 -- # echo 5104 00:20:54.832 10:21:14 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:54.832 10:21:14 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0074ef8a-8f2d-4512-8a8b-a94c5672c43c lbd_nest_0 5104 00:20:55.090 10:21:14 -- host/perf.sh@88 -- # lb_nested_guid=80262853-1f1c-4873-a74c-714a0c9514e9 00:20:55.090 10:21:14 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.347 10:21:14 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:55.347 10:21:14 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 80262853-1f1c-4873-a74c-714a0c9514e9 00:20:55.606 10:21:15 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.864 10:21:15 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:55.864 10:21:15 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:55.864 10:21:15 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:55.864 10:21:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:55.864 10:21:15 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:56.122 No valid NVMe controllers or AIO or URING devices found 00:20:56.122 Initializing NVMe Controllers 00:20:56.122 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:56.122 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:56.122 WARNING: Some requested NVMe devices were skipped 00:20:56.122 10:21:15 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:56.122 10:21:15 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.379 Initializing NVMe Controllers 00:21:08.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.379 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:08.379 Initialization complete. Launching workers. 00:21:08.379 ======================================================== 00:21:08.379 Latency(us) 00:21:08.379 Device Information : IOPS MiB/s Average min max 00:21:08.379 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.09 127.51 979.42 330.69 6433.26 00:21:08.379 ======================================================== 00:21:08.379 Total : 1020.09 127.51 979.42 330.69 6433.26 00:21:08.379 00:21:08.379 10:21:25 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:08.379 10:21:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.379 10:21:25 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:08.379 No valid NVMe controllers or AIO or URING devices found 00:21:08.379 Initializing NVMe Controllers 00:21:08.379 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:08.379 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:08.379 WARNING: Some requested NVMe devices were skipped 00:21:08.379 10:21:26 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:08.379 10:21:26 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.349 [2024-11-19 10:21:36.442880] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fc0a0 is same with the state(5) to be set 00:21:18.349 [2024-11-19 10:21:36.442951] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16fc0a0 is same with the state(5) to be set 00:21:18.349 Initializing NVMe Controllers 00:21:18.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.349 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.349 Initialization complete. Launching workers. 
00:21:18.349 ======================================================== 00:21:18.349 Latency(us) 00:21:18.349 Device Information : IOPS MiB/s Average min max 00:21:18.349 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1151.70 143.96 27801.78 8013.32 83622.81 00:21:18.349 ======================================================== 00:21:18.349 Total : 1151.70 143.96 27801.78 8013.32 83622.81 00:21:18.349 00:21:18.349 10:21:36 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:18.349 10:21:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.349 10:21:36 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:18.349 No valid NVMe controllers or AIO or URING devices found 00:21:18.349 Initializing NVMe Controllers 00:21:18.349 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.349 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:18.349 WARNING: Some requested NVMe devices were skipped 00:21:18.349 10:21:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:18.349 10:21:36 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:28.320 Initializing NVMe Controllers 00:21:28.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:28.320 Controller IO queue size 128, less than required. 00:21:28.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:28.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:28.320 Initialization complete. Launching workers. 
00:21:28.320 ======================================================== 00:21:28.320 Latency(us) 00:21:28.320 Device Information : IOPS MiB/s Average min max 00:21:28.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3691.38 461.42 34758.53 9044.25 98419.08 00:21:28.320 ======================================================== 00:21:28.320 Total : 3691.38 461.42 34758.53 9044.25 98419.08 00:21:28.320 00:21:28.320 10:21:47 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.320 10:21:47 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 80262853-1f1c-4873-a74c-714a0c9514e9 00:21:28.578 10:21:47 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:28.836 10:21:48 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 723538d2-698a-4a88-b1ab-935f6b09d6a5 00:21:29.095 10:21:48 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:29.661 10:21:48 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:29.662 10:21:48 -- host/perf.sh@114 -- # nvmftestfini 00:21:29.662 10:21:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:29.662 10:21:48 -- nvmf/common.sh@116 -- # sync 00:21:29.662 10:21:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:29.662 10:21:48 -- nvmf/common.sh@119 -- # set +e 00:21:29.662 10:21:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:29.662 10:21:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:29.662 rmmod nvme_tcp 00:21:29.662 rmmod nvme_fabrics 00:21:29.662 rmmod nvme_keyring 00:21:29.662 10:21:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:29.662 10:21:48 -- nvmf/common.sh@123 -- # set -e 00:21:29.662 10:21:48 -- nvmf/common.sh@124 -- # return 0 00:21:29.662 10:21:48 -- nvmf/common.sh@477 -- # '[' -n 93205 ']' 00:21:29.662 10:21:48 -- nvmf/common.sh@478 -- # killprocess 93205 00:21:29.662 10:21:48 -- common/autotest_common.sh@936 -- # '[' -z 93205 ']' 00:21:29.662 10:21:48 -- common/autotest_common.sh@940 -- # kill -0 93205 00:21:29.662 10:21:48 -- common/autotest_common.sh@941 -- # uname 00:21:29.662 10:21:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:29.662 10:21:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93205 00:21:29.662 10:21:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:29.662 killing process with pid 93205 00:21:29.662 10:21:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:29.662 10:21:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93205' 00:21:29.662 10:21:49 -- common/autotest_common.sh@955 -- # kill 93205 00:21:29.662 10:21:49 -- common/autotest_common.sh@960 -- # wait 93205 00:21:31.085 10:21:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:31.085 10:21:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:31.085 10:21:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:31.085 10:21:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.085 10:21:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:31.085 10:21:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.085 10:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.085 10:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.085 10:21:50 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:21:31.085 00:21:31.085 real 0m51.156s 00:21:31.085 user 3m12.854s 00:21:31.085 sys 0m10.755s 00:21:31.085 10:21:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:31.085 10:21:50 -- common/autotest_common.sh@10 -- # set +x 00:21:31.085 ************************************ 00:21:31.085 END TEST nvmf_perf 00:21:31.085 ************************************ 00:21:31.085 10:21:50 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:31.085 10:21:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:31.085 10:21:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:31.085 10:21:50 -- common/autotest_common.sh@10 -- # set +x 00:21:31.085 ************************************ 00:21:31.085 START TEST nvmf_fio_host 00:21:31.085 ************************************ 00:21:31.085 10:21:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:31.085 * Looking for test storage... 00:21:31.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:31.085 10:21:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:31.085 10:21:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:31.085 10:21:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:31.345 10:21:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:31.345 10:21:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:31.345 10:21:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:31.345 10:21:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:31.345 10:21:50 -- scripts/common.sh@335 -- # IFS=.-: 00:21:31.345 10:21:50 -- scripts/common.sh@335 -- # read -ra ver1 00:21:31.345 10:21:50 -- scripts/common.sh@336 -- # IFS=.-: 00:21:31.345 10:21:50 -- scripts/common.sh@336 -- # read -ra ver2 00:21:31.345 10:21:50 -- scripts/common.sh@337 -- # local 'op=<' 00:21:31.345 10:21:50 -- scripts/common.sh@339 -- # ver1_l=2 00:21:31.345 10:21:50 -- scripts/common.sh@340 -- # ver2_l=1 00:21:31.345 10:21:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:31.345 10:21:50 -- scripts/common.sh@343 -- # case "$op" in 00:21:31.345 10:21:50 -- scripts/common.sh@344 -- # : 1 00:21:31.345 10:21:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:31.345 10:21:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:31.345 10:21:50 -- scripts/common.sh@364 -- # decimal 1 00:21:31.345 10:21:50 -- scripts/common.sh@352 -- # local d=1 00:21:31.345 10:21:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:31.345 10:21:50 -- scripts/common.sh@354 -- # echo 1 00:21:31.345 10:21:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:31.345 10:21:50 -- scripts/common.sh@365 -- # decimal 2 00:21:31.345 10:21:50 -- scripts/common.sh@352 -- # local d=2 00:21:31.345 10:21:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:31.345 10:21:50 -- scripts/common.sh@354 -- # echo 2 00:21:31.345 10:21:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:31.345 10:21:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:31.345 10:21:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:31.345 10:21:50 -- scripts/common.sh@367 -- # return 0 00:21:31.345 10:21:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:31.345 10:21:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.345 --rc genhtml_branch_coverage=1 00:21:31.345 --rc genhtml_function_coverage=1 00:21:31.345 --rc genhtml_legend=1 00:21:31.345 --rc geninfo_all_blocks=1 00:21:31.345 --rc geninfo_unexecuted_blocks=1 00:21:31.345 00:21:31.345 ' 00:21:31.345 10:21:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.345 --rc genhtml_branch_coverage=1 00:21:31.345 --rc genhtml_function_coverage=1 00:21:31.345 --rc genhtml_legend=1 00:21:31.345 --rc geninfo_all_blocks=1 00:21:31.345 --rc geninfo_unexecuted_blocks=1 00:21:31.345 00:21:31.345 ' 00:21:31.345 10:21:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.345 --rc genhtml_branch_coverage=1 00:21:31.345 --rc genhtml_function_coverage=1 00:21:31.345 --rc genhtml_legend=1 00:21:31.345 --rc geninfo_all_blocks=1 00:21:31.345 --rc geninfo_unexecuted_blocks=1 00:21:31.345 00:21:31.345 ' 00:21:31.345 10:21:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:31.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:31.345 --rc genhtml_branch_coverage=1 00:21:31.345 --rc genhtml_function_coverage=1 00:21:31.345 --rc genhtml_legend=1 00:21:31.345 --rc geninfo_all_blocks=1 00:21:31.345 --rc geninfo_unexecuted_blocks=1 00:21:31.345 00:21:31.345 ' 00:21:31.345 10:21:50 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.345 10:21:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.345 10:21:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.345 10:21:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.345 10:21:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.345 10:21:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.345 10:21:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.345 10:21:50 -- paths/export.sh@5 -- # export PATH 00:21:31.345 10:21:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.345 10:21:50 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.345 10:21:50 -- nvmf/common.sh@7 -- # uname -s 00:21:31.345 10:21:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.345 10:21:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.345 10:21:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.345 10:21:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.345 10:21:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.345 10:21:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.345 10:21:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.345 10:21:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.345 10:21:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.345 10:21:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.345 10:21:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:21:31.345 10:21:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:21:31.345 10:21:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.345 10:21:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.345 10:21:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.345 10:21:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.345 10:21:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.345 10:21:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.345 10:21:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.345 10:21:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.345 10:21:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.346 10:21:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.346 10:21:50 -- paths/export.sh@5 -- # export PATH 00:21:31.346 10:21:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.346 10:21:50 -- nvmf/common.sh@46 -- # : 0 00:21:31.346 10:21:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:31.346 10:21:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:31.346 10:21:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:31.346 10:21:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.346 10:21:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.346 10:21:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:31.346 10:21:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:31.346 10:21:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:31.346 10:21:50 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.346 10:21:50 -- host/fio.sh@14 -- # nvmftestinit 00:21:31.346 10:21:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:31.346 10:21:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.346 10:21:50 -- nvmf/common.sh@436 -- # prepare_net_devs 
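(For readers following the fixture setup that begins here: the nvmftestinit/nvmf_veth_init steps traced below reduce to roughly the sketch that follows. Interface, namespace, and address names are taken from the trace itself; the link-up commands and the FORWARD iptables rule are omitted for brevity, so treat this as a condensed outline of the fixture, not the full helper.)
# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# three veth pairs: one initiator-facing, two target-facing
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers together and open TCP/4420 toward the initiator interface
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # reachability check before the target is started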
00:21:31.346 10:21:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:31.346 10:21:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:31.346 10:21:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.346 10:21:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.346 10:21:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.346 10:21:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:31.346 10:21:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:31.346 10:21:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:31.346 10:21:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:31.346 10:21:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:31.346 10:21:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:31.346 10:21:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.346 10:21:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.346 10:21:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:31.346 10:21:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:31.346 10:21:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.346 10:21:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.346 10:21:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.346 10:21:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.346 10:21:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.346 10:21:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.346 10:21:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.346 10:21:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.346 10:21:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:31.346 10:21:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:31.346 Cannot find device "nvmf_tgt_br" 00:21:31.346 10:21:50 -- nvmf/common.sh@154 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.346 Cannot find device "nvmf_tgt_br2" 00:21:31.346 10:21:50 -- nvmf/common.sh@155 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:31.346 10:21:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:31.346 Cannot find device "nvmf_tgt_br" 00:21:31.346 10:21:50 -- nvmf/common.sh@157 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:31.346 Cannot find device "nvmf_tgt_br2" 00:21:31.346 10:21:50 -- nvmf/common.sh@158 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:31.346 10:21:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:31.346 10:21:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.346 10:21:50 -- nvmf/common.sh@161 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.346 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.346 10:21:50 -- nvmf/common.sh@162 -- # true 00:21:31.346 10:21:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.346 10:21:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.346 10:21:50 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.346 10:21:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.346 10:21:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.346 10:21:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.605 10:21:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.605 10:21:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.605 10:21:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.605 10:21:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:31.605 10:21:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:31.605 10:21:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:31.605 10:21:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:31.605 10:21:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.605 10:21:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.605 10:21:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.605 10:21:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:31.605 10:21:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:31.605 10:21:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.605 10:21:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:31.605 10:21:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.605 10:21:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.605 10:21:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.605 10:21:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:31.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:31.605 00:21:31.605 --- 10.0.0.2 ping statistics --- 00:21:31.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.605 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:31.605 10:21:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:31.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:31.605 00:21:31.605 --- 10.0.0.3 ping statistics --- 00:21:31.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.605 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:31.605 10:21:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:31.605 00:21:31.605 --- 10.0.0.1 ping statistics --- 00:21:31.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.605 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:31.605 10:21:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.605 10:21:51 -- nvmf/common.sh@421 -- # return 0 00:21:31.605 10:21:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:31.605 10:21:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.605 10:21:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:31.605 10:21:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:31.605 10:21:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.605 10:21:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:31.605 10:21:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:31.605 10:21:51 -- host/fio.sh@16 -- # [[ y != y ]] 00:21:31.605 10:21:51 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:31.605 10:21:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:31.605 10:21:51 -- common/autotest_common.sh@10 -- # set +x 00:21:31.605 10:21:51 -- host/fio.sh@24 -- # nvmfpid=94182 00:21:31.605 10:21:51 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.605 10:21:51 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.605 10:21:51 -- host/fio.sh@28 -- # waitforlisten 94182 00:21:31.605 10:21:51 -- common/autotest_common.sh@829 -- # '[' -z 94182 ']' 00:21:31.605 10:21:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.605 10:21:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.605 10:21:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.605 10:21:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.605 10:21:51 -- common/autotest_common.sh@10 -- # set +x 00:21:31.605 [2024-11-19 10:21:51.107682] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:31.605 [2024-11-19 10:21:51.107765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.864 [2024-11-19 10:21:51.243312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.864 [2024-11-19 10:21:51.292331] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:31.864 [2024-11-19 10:21:51.292849] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.864 [2024-11-19 10:21:51.293053] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.864 [2024-11-19 10:21:51.293246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
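(Once waitforlisten sees the RPC socket, the host/fio test configures the target entirely over rpc.py before handing off to fio. Stripped of the xtrace prefixes, the lines that follow execute roughly this sequence; $rpc is shorthand introduced here for the rpc.py path shown in the trace, and the fio command is the plugin invocation printed further below.)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport with the test's options
$rpc bdev_malloc_create 64 512 -b Malloc1                                         # 64 MiB RAM bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host to connect
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# fio then attaches through the SPDK nvme fio plugin via LD_PRELOAD:
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096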
00:21:31.864 [2024-11-19 10:21:51.293517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.864 [2024-11-19 10:21:51.293582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.864 [2024-11-19 10:21:51.293649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.864 [2024-11-19 10:21:51.293796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.798 10:21:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.798 10:21:52 -- common/autotest_common.sh@862 -- # return 0 00:21:32.798 10:21:52 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:33.057 [2024-11-19 10:21:52.458839] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.057 10:21:52 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:33.057 10:21:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:33.057 10:21:52 -- common/autotest_common.sh@10 -- # set +x 00:21:33.057 10:21:52 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:33.314 Malloc1 00:21:33.314 10:21:52 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.573 10:21:53 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.832 10:21:53 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.091 [2024-11-19 10:21:53.606957] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.091 10:21:53 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:34.657 10:21:53 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:34.657 10:21:53 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:34.657 10:21:53 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:34.657 10:21:53 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:34.657 10:21:53 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:34.657 10:21:53 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:34.657 10:21:53 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:34.657 10:21:53 -- common/autotest_common.sh@1330 -- # shift 00:21:34.657 10:21:53 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:34.657 10:21:53 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.657 10:21:53 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:34.657 10:21:53 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:34.657 10:21:53 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:34.657 10:21:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:34.657 10:21:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:34.657 10:21:54 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:34.657 10:21:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:34.657 10:21:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:34.657 10:21:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:34.657 10:21:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:34.657 10:21:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:34.657 10:21:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:34.657 10:21:54 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:34.657 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:34.657 fio-3.35 00:21:34.657 Starting 1 thread 00:21:37.184 00:21:37.184 test: (groupid=0, jobs=1): err= 0: pid=94312: Tue Nov 19 10:21:56 2024 00:21:37.184 read: IOPS=8177, BW=31.9MiB/s (33.5MB/s)(64.1MiB/2006msec) 00:21:37.184 slat (usec): min=2, max=346, avg= 2.85, stdev= 3.49 00:21:37.184 clat (usec): min=3520, max=15264, avg=8320.87, stdev=1889.47 00:21:37.184 lat (usec): min=3562, max=15269, avg=8323.72, stdev=1890.06 00:21:37.184 clat percentiles (usec): 00:21:37.184 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6718], 20.00th=[ 6980], 00:21:37.184 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7963], 00:21:37.184 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[11731], 95.00th=[12780], 00:21:37.184 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15008], 99.95th=[15139], 00:21:37.184 | 99.99th=[15270] 00:21:37.184 bw ( KiB/s): min=26320, max=37776, per=99.86%, avg=32664.00, stdev=4839.17, samples=4 00:21:37.184 iops : min= 6580, max= 9444, avg=8166.00, stdev=1209.79, samples=4 00:21:37.184 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(64.1MiB/2006msec); 0 zone resets 00:21:37.184 slat (usec): min=2, max=568, avg= 3.00, stdev= 5.33 00:21:37.184 clat (usec): min=2552, max=13316, avg=7268.14, stdev=1599.96 00:21:37.184 lat (usec): min=2566, max=13319, avg=7271.14, stdev=1600.60 00:21:37.184 clat percentiles (usec): 00:21:37.184 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:21:37.184 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6915], 00:21:37.184 | 70.00th=[ 7439], 80.00th=[ 8225], 90.00th=[10159], 95.00th=[11076], 00:21:37.184 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12649], 99.95th=[13042], 00:21:37.184 | 99.99th=[13304] 00:21:37.184 bw ( KiB/s): min=25920, max=37136, per=99.96%, avg=32692.00, stdev=4850.54, samples=4 00:21:37.184 iops : min= 6480, max= 9284, avg=8173.00, stdev=1212.63, samples=4 00:21:37.184 lat (msec) : 4=0.07%, 10=87.11%, 20=12.82% 00:21:37.184 cpu : usr=67.08%, sys=23.99%, ctx=6, majf=0, minf=5 00:21:37.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:37.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:37.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:37.184 issued rwts: total=16404,16401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:37.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:37.184 00:21:37.184 Run status group 0 (all jobs): 00:21:37.184 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.1MiB (67.2MB), run=2006-2006msec 
00:21:37.184 WRITE: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.1MiB (67.2MB), run=2006-2006msec 00:21:37.184 10:21:56 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.184 10:21:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.185 10:21:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:37.185 10:21:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:37.185 10:21:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:37.185 10:21:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:37.185 10:21:56 -- common/autotest_common.sh@1330 -- # shift 00:21:37.185 10:21:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:37.185 10:21:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:37.185 10:21:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:37.185 10:21:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:37.185 10:21:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:37.185 10:21:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:37.185 10:21:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:37.185 10:21:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:37.185 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:37.185 fio-3.35 00:21:37.185 Starting 1 thread 00:21:39.714 00:21:39.714 test: (groupid=0, jobs=1): err= 0: pid=94356: Tue Nov 19 10:21:58 2024 00:21:39.714 read: IOPS=7106, BW=111MiB/s (116MB/s)(223MiB/2005msec) 00:21:39.714 slat (usec): min=3, max=226, avg= 4.66, stdev= 3.42 00:21:39.714 clat (usec): min=2856, max=25666, avg=10876.30, stdev=3085.84 00:21:39.714 lat (usec): min=2860, max=25683, avg=10880.96, stdev=3086.80 00:21:39.714 clat percentiles (usec): 00:21:39.714 | 1.00th=[ 5407], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 8291], 00:21:39.714 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11469], 00:21:39.714 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14222], 95.00th=[16450], 00:21:39.714 | 99.00th=[21365], 99.50th=[22676], 99.90th=[25297], 99.95th=[25560], 00:21:39.714 | 99.99th=[25560] 00:21:39.714 bw ( KiB/s): min=46656, max=63040, per=49.97%, avg=56816.00, stdev=7470.45, samples=4 00:21:39.714 iops : min= 2916, max= 
3940, avg=3551.00, stdev=466.90, samples=4 00:21:39.714 write: IOPS=4224, BW=66.0MiB/s (69.2MB/s)(117MiB/1769msec); 0 zone resets 00:21:39.714 slat (usec): min=37, max=430, avg=42.68, stdev= 8.37 00:21:39.714 clat (usec): min=3920, max=32791, avg=12641.64, stdev=3252.10 00:21:39.714 lat (usec): min=3959, max=32836, avg=12684.32, stdev=3254.61 00:21:39.714 clat percentiles (usec): 00:21:39.714 | 1.00th=[ 7046], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:21:39.714 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:21:39.714 | 70.00th=[13042], 80.00th=[14091], 90.00th=[16581], 95.00th=[19792], 00:21:39.714 | 99.00th=[24249], 99.50th=[25560], 99.90th=[26870], 99.95th=[27132], 00:21:39.714 | 99.99th=[32900] 00:21:39.714 bw ( KiB/s): min=49632, max=65280, per=87.59%, avg=59208.00, stdev=7182.86, samples=4 00:21:39.714 iops : min= 3102, max= 4080, avg=3700.50, stdev=448.93, samples=4 00:21:39.715 lat (msec) : 4=0.14%, 10=30.90%, 20=65.97%, 50=2.99% 00:21:39.715 cpu : usr=68.71%, sys=20.11%, ctx=7, majf=0, minf=1 00:21:39.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:21:39.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:39.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:39.715 issued rwts: total=14248,7474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:39.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:39.715 00:21:39.715 Run status group 0 (all jobs): 00:21:39.715 READ: bw=111MiB/s (116MB/s), 111MiB/s-111MiB/s (116MB/s-116MB/s), io=223MiB (233MB), run=2005-2005msec 00:21:39.715 WRITE: bw=66.0MiB/s (69.2MB/s), 66.0MiB/s-66.0MiB/s (69.2MB/s-69.2MB/s), io=117MiB (122MB), run=1769-1769msec 00:21:39.715 10:21:58 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.715 10:21:59 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:39.715 10:21:59 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:39.715 10:21:59 -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:39.715 10:21:59 -- common/autotest_common.sh@1508 -- # bdfs=() 00:21:39.715 10:21:59 -- common/autotest_common.sh@1508 -- # local bdfs 00:21:39.715 10:21:59 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:39.715 10:21:59 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:39.715 10:21:59 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:21:39.972 10:21:59 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:21:39.972 10:21:59 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:39.972 10:21:59 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:40.231 Nvme0n1 00:21:40.231 10:21:59 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:40.489 10:21:59 -- host/fio.sh@53 -- # ls_guid=90e43bc2-758f-40b4-91eb-2361f9cefa02 00:21:40.489 10:21:59 -- host/fio.sh@54 -- # get_lvs_free_mb 90e43bc2-758f-40b4-91eb-2361f9cefa02 00:21:40.489 10:21:59 -- common/autotest_common.sh@1353 -- # local lvs_uuid=90e43bc2-758f-40b4-91eb-2361f9cefa02 00:21:40.489 10:21:59 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:40.489 10:21:59 -- common/autotest_common.sh@1355 -- # local fc 00:21:40.489 10:21:59 -- 
common/autotest_common.sh@1356 -- # local cs 00:21:40.489 10:21:59 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:40.747 10:22:00 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:40.747 { 00:21:40.747 "base_bdev": "Nvme0n1", 00:21:40.747 "block_size": 4096, 00:21:40.747 "cluster_size": 1073741824, 00:21:40.747 "free_clusters": 4, 00:21:40.747 "name": "lvs_0", 00:21:40.747 "total_data_clusters": 4, 00:21:40.748 "uuid": "90e43bc2-758f-40b4-91eb-2361f9cefa02" 00:21:40.748 } 00:21:40.748 ]' 00:21:40.748 10:22:00 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="90e43bc2-758f-40b4-91eb-2361f9cefa02") .free_clusters' 00:21:40.748 10:22:00 -- common/autotest_common.sh@1358 -- # fc=4 00:21:40.748 10:22:00 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="90e43bc2-758f-40b4-91eb-2361f9cefa02") .cluster_size' 00:21:41.006 10:22:00 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:21:41.006 10:22:00 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:21:41.006 4096 00:21:41.006 10:22:00 -- common/autotest_common.sh@1363 -- # echo 4096 00:21:41.006 10:22:00 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:41.264 253eff8b-27cd-49ff-a5f7-97fae3f237b2 00:21:41.264 10:22:00 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:41.521 10:22:00 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:41.779 10:22:01 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:42.090 10:22:01 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.090 10:22:01 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.090 10:22:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:42.090 10:22:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:42.090 10:22:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:42.090 10:22:01 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.090 10:22:01 -- common/autotest_common.sh@1330 -- # shift 00:21:42.090 10:22:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:42.090 10:22:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:42.090 10:22:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:42.090 10:22:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:42.090 10:22:01 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:42.090 10:22:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:42.090 10:22:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:42.090 10:22:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:42.090 10:22:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:42.090 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:42.090 fio-3.35 00:21:42.090 Starting 1 thread 00:21:44.620 00:21:44.620 test: (groupid=0, jobs=1): err= 0: pid=94514: Tue Nov 19 10:22:03 2024 00:21:44.620 read: IOPS=6648, BW=26.0MiB/s (27.2MB/s)(52.2MiB/2008msec) 00:21:44.620 slat (usec): min=2, max=354, avg= 2.83, stdev= 4.08 00:21:44.620 clat (usec): min=4046, max=17484, avg=10291.93, stdev=1048.89 00:21:44.620 lat (usec): min=4056, max=17486, avg=10294.75, stdev=1048.69 00:21:44.620 clat percentiles (usec): 00:21:44.620 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:21:44.620 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:21:44.620 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:21:44.620 | 99.00th=[12911], 99.50th=[13304], 99.90th=[16188], 99.95th=[16909], 00:21:44.620 | 99.99th=[17433] 00:21:44.620 bw ( KiB/s): min=26040, max=27240, per=99.86%, avg=26558.00, stdev=513.39, samples=4 00:21:44.620 iops : min= 6510, max= 6810, avg=6639.50, stdev=128.35, samples=4 00:21:44.620 write: IOPS=6654, BW=26.0MiB/s (27.3MB/s)(52.2MiB/2008msec); 0 zone resets 00:21:44.620 slat (usec): min=2, max=301, avg= 2.96, stdev= 2.90 00:21:44.620 clat (usec): min=2480, max=16167, avg=8880.06, stdev=882.10 00:21:44.620 lat (usec): min=2491, max=16170, avg=8883.02, stdev=881.97 00:21:44.620 clat percentiles (usec): 00:21:44.620 | 1.00th=[ 6915], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8160], 00:21:44.620 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:21:44.620 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 00:21:44.620 | 99.00th=[10945], 99.50th=[11469], 99.90th=[13960], 99.95th=[15139], 00:21:44.620 | 99.99th=[15926] 00:21:44.620 bw ( KiB/s): min=25920, max=27096, per=99.99%, avg=26614.00, stdev=501.62, samples=4 00:21:44.620 iops : min= 6480, max= 6774, avg=6653.50, stdev=125.40, samples=4 00:21:44.620 lat (msec) : 4=0.04%, 10=65.65%, 20=34.31% 00:21:44.620 cpu : usr=70.60%, sys=21.87%, ctx=8, majf=0, minf=5 00:21:44.620 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:44.620 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:44.620 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:44.620 issued rwts: total=13351,13362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:44.620 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:44.620 00:21:44.620 Run status group 0 (all jobs): 00:21:44.620 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=52.2MiB (54.7MB), run=2008-2008msec 00:21:44.620 WRITE: bw=26.0MiB/s (27.3MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=52.2MiB (54.7MB), run=2008-2008msec 00:21:44.620 10:22:03 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:44.878 10:22:04 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:45.137 10:22:04 -- host/fio.sh@64 -- # ls_nested_guid=49184e7d-8dd2-4f9d-a74e-83c1c03434b3 00:21:45.137 10:22:04 -- host/fio.sh@65 -- # get_lvs_free_mb 49184e7d-8dd2-4f9d-a74e-83c1c03434b3 00:21:45.137 10:22:04 -- common/autotest_common.sh@1353 -- # local lvs_uuid=49184e7d-8dd2-4f9d-a74e-83c1c03434b3 00:21:45.137 10:22:04 -- common/autotest_common.sh@1354 -- # local lvs_info 00:21:45.137 10:22:04 -- common/autotest_common.sh@1355 -- # local fc 00:21:45.137 10:22:04 -- common/autotest_common.sh@1356 -- # local cs 00:21:45.137 10:22:04 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:45.396 10:22:04 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:21:45.396 { 00:21:45.396 "base_bdev": "Nvme0n1", 00:21:45.396 "block_size": 4096, 00:21:45.396 "cluster_size": 1073741824, 00:21:45.396 "free_clusters": 0, 00:21:45.396 "name": "lvs_0", 00:21:45.396 "total_data_clusters": 4, 00:21:45.396 "uuid": "90e43bc2-758f-40b4-91eb-2361f9cefa02" 00:21:45.396 }, 00:21:45.396 { 00:21:45.396 "base_bdev": "253eff8b-27cd-49ff-a5f7-97fae3f237b2", 00:21:45.396 "block_size": 4096, 00:21:45.396 "cluster_size": 4194304, 00:21:45.396 "free_clusters": 1022, 00:21:45.396 "name": "lvs_n_0", 00:21:45.396 "total_data_clusters": 1022, 00:21:45.396 "uuid": "49184e7d-8dd2-4f9d-a74e-83c1c03434b3" 00:21:45.396 } 00:21:45.396 ]' 00:21:45.396 10:22:04 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="49184e7d-8dd2-4f9d-a74e-83c1c03434b3") .free_clusters' 00:21:45.396 10:22:04 -- common/autotest_common.sh@1358 -- # fc=1022 00:21:45.396 10:22:04 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="49184e7d-8dd2-4f9d-a74e-83c1c03434b3") .cluster_size' 00:21:45.396 4088 00:21:45.396 10:22:04 -- common/autotest_common.sh@1359 -- # cs=4194304 00:21:45.396 10:22:04 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:21:45.396 10:22:04 -- common/autotest_common.sh@1363 -- # echo 4088 00:21:45.396 10:22:04 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:45.655 a9c6322d-3a66-42e7-a32c-47b5b1b68d91 00:21:45.655 10:22:05 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:45.913 10:22:05 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:46.171 10:22:05 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:46.429 10:22:05 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.429 10:22:05 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.429 10:22:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:46.429 10:22:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:46.429 10:22:05 -- common/autotest_common.sh@1328 -- # 
local sanitizers 00:21:46.429 10:22:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.429 10:22:05 -- common/autotest_common.sh@1330 -- # shift 00:21:46.429 10:22:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:46.429 10:22:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:46.429 10:22:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:46.429 10:22:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:46.429 10:22:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:46.686 10:22:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:46.686 10:22:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:46.686 10:22:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:46.686 10:22:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:46.686 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:46.686 fio-3.35 00:21:46.686 Starting 1 thread 00:21:49.270 00:21:49.270 test: (groupid=0, jobs=1): err= 0: pid=94641: Tue Nov 19 10:22:08 2024 00:21:49.270 read: IOPS=5556, BW=21.7MiB/s (22.8MB/s)(43.7MiB/2012msec) 00:21:49.270 slat (usec): min=2, max=175, avg= 2.74, stdev= 2.18 00:21:49.270 clat (usec): min=4199, max=26703, avg=12351.93, stdev=1618.09 00:21:49.270 lat (usec): min=4203, max=26706, avg=12354.67, stdev=1618.02 00:21:49.270 clat percentiles (usec): 00:21:49.270 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10552], 20.00th=[11076], 00:21:49.270 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12518], 00:21:49.270 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14353], 95.00th=[15139], 00:21:49.270 | 99.00th=[16581], 99.50th=[17171], 99.90th=[25035], 99.95th=[26346], 00:21:49.270 | 99.99th=[26608] 00:21:49.270 bw ( KiB/s): min=21608, max=22920, per=100.00%, avg=22236.00, stdev=538.03, samples=4 00:21:49.270 iops : min= 5402, max= 5730, avg=5559.00, stdev=134.51, samples=4 00:21:49.270 write: IOPS=5525, BW=21.6MiB/s (22.6MB/s)(43.4MiB/2012msec); 0 zone resets 00:21:49.270 slat (usec): min=2, max=128, avg= 2.91, stdev= 1.70 00:21:49.270 clat (usec): min=1910, max=25117, avg=10679.67, stdev=1478.21 00:21:49.270 lat (usec): min=1916, max=25119, avg=10682.57, stdev=1478.18 00:21:49.270 clat percentiles (usec): 00:21:49.270 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:21:49.270 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10552], 60.00th=[10814], 00:21:49.270 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12387], 95.00th=[13042], 00:21:49.270 | 99.00th=[14222], 99.50th=[15533], 99.90th=[23200], 99.95th=[23725], 00:21:49.270 | 99.99th=[25035] 00:21:49.270 bw ( KiB/s): min=20992, max=23064, per=100.00%, 
avg=22102.00, stdev=869.65, samples=4 00:21:49.270 iops : min= 5248, max= 5766, avg=5525.50, stdev=217.41, samples=4 00:21:49.270 lat (msec) : 2=0.01%, 4=0.04%, 10=18.01%, 20=81.69%, 50=0.25% 00:21:49.270 cpu : usr=72.00%, sys=21.63%, ctx=6, majf=0, minf=5 00:21:49.270 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:49.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:49.271 issued rwts: total=11179,11117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.271 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:49.271 00:21:49.271 Run status group 0 (all jobs): 00:21:49.271 READ: bw=21.7MiB/s (22.8MB/s), 21.7MiB/s-21.7MiB/s (22.8MB/s-22.8MB/s), io=43.7MiB (45.8MB), run=2012-2012msec 00:21:49.271 WRITE: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=43.4MiB (45.5MB), run=2012-2012msec 00:21:49.271 10:22:08 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:49.271 10:22:08 -- host/fio.sh@74 -- # sync 00:21:49.271 10:22:08 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:49.529 10:22:08 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:49.786 10:22:09 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:50.045 10:22:09 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:50.303 10:22:09 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:50.869 10:22:10 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:50.869 10:22:10 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:50.869 10:22:10 -- host/fio.sh@86 -- # nvmftestfini 00:21:50.869 10:22:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:50.869 10:22:10 -- nvmf/common.sh@116 -- # sync 00:21:50.869 10:22:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:50.869 10:22:10 -- nvmf/common.sh@119 -- # set +e 00:21:50.869 10:22:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:50.869 10:22:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:50.869 rmmod nvme_tcp 00:21:50.869 rmmod nvme_fabrics 00:21:50.869 rmmod nvme_keyring 00:21:50.869 10:22:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:50.869 10:22:10 -- nvmf/common.sh@123 -- # set -e 00:21:50.870 10:22:10 -- nvmf/common.sh@124 -- # return 0 00:21:50.870 10:22:10 -- nvmf/common.sh@477 -- # '[' -n 94182 ']' 00:21:50.870 10:22:10 -- nvmf/common.sh@478 -- # killprocess 94182 00:21:50.870 10:22:10 -- common/autotest_common.sh@936 -- # '[' -z 94182 ']' 00:21:50.870 10:22:10 -- common/autotest_common.sh@940 -- # kill -0 94182 00:21:50.870 10:22:10 -- common/autotest_common.sh@941 -- # uname 00:21:50.870 10:22:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.870 10:22:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94182 00:21:50.870 killing process with pid 94182 00:21:50.870 10:22:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:50.870 10:22:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:50.870 10:22:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94182' 00:21:50.870 10:22:10 -- common/autotest_common.sh@955 -- # kill 94182 
00:21:50.870 10:22:10 -- common/autotest_common.sh@960 -- # wait 94182 00:21:51.129 10:22:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:51.129 10:22:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:51.129 10:22:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:51.129 10:22:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.129 10:22:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:51.129 10:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.129 10:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.129 10:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.129 10:22:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:51.129 00:21:51.129 real 0m20.029s 00:21:51.129 user 1m28.573s 00:21:51.129 sys 0m4.418s 00:21:51.129 10:22:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:51.129 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:51.129 ************************************ 00:21:51.129 END TEST nvmf_fio_host 00:21:51.129 ************************************ 00:21:51.129 10:22:10 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:51.129 10:22:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:51.129 10:22:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.129 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:21:51.129 ************************************ 00:21:51.129 START TEST nvmf_failover 00:21:51.129 ************************************ 00:21:51.129 10:22:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:51.129 * Looking for test storage... 00:21:51.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:51.129 10:22:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:51.129 10:22:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:51.129 10:22:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:51.386 10:22:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:51.386 10:22:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:51.386 10:22:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:51.386 10:22:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:51.386 10:22:10 -- scripts/common.sh@335 -- # IFS=.-: 00:21:51.386 10:22:10 -- scripts/common.sh@335 -- # read -ra ver1 00:21:51.386 10:22:10 -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.386 10:22:10 -- scripts/common.sh@336 -- # read -ra ver2 00:21:51.386 10:22:10 -- scripts/common.sh@337 -- # local 'op=<' 00:21:51.386 10:22:10 -- scripts/common.sh@339 -- # ver1_l=2 00:21:51.386 10:22:10 -- scripts/common.sh@340 -- # ver2_l=1 00:21:51.386 10:22:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:51.386 10:22:10 -- scripts/common.sh@343 -- # case "$op" in 00:21:51.386 10:22:10 -- scripts/common.sh@344 -- # : 1 00:21:51.386 10:22:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:51.386 10:22:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.386 10:22:10 -- scripts/common.sh@364 -- # decimal 1 00:21:51.386 10:22:10 -- scripts/common.sh@352 -- # local d=1 00:21:51.386 10:22:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.386 10:22:10 -- scripts/common.sh@354 -- # echo 1 00:21:51.386 10:22:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:51.386 10:22:10 -- scripts/common.sh@365 -- # decimal 2 00:21:51.386 10:22:10 -- scripts/common.sh@352 -- # local d=2 00:21:51.386 10:22:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.386 10:22:10 -- scripts/common.sh@354 -- # echo 2 00:21:51.386 10:22:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:51.386 10:22:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:51.386 10:22:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:51.386 10:22:10 -- scripts/common.sh@367 -- # return 0 00:21:51.386 10:22:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.386 10:22:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.386 --rc genhtml_branch_coverage=1 00:21:51.386 --rc genhtml_function_coverage=1 00:21:51.386 --rc genhtml_legend=1 00:21:51.386 --rc geninfo_all_blocks=1 00:21:51.386 --rc geninfo_unexecuted_blocks=1 00:21:51.386 00:21:51.386 ' 00:21:51.386 10:22:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.386 --rc genhtml_branch_coverage=1 00:21:51.386 --rc genhtml_function_coverage=1 00:21:51.386 --rc genhtml_legend=1 00:21:51.386 --rc geninfo_all_blocks=1 00:21:51.386 --rc geninfo_unexecuted_blocks=1 00:21:51.386 00:21:51.386 ' 00:21:51.386 10:22:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.386 --rc genhtml_branch_coverage=1 00:21:51.386 --rc genhtml_function_coverage=1 00:21:51.386 --rc genhtml_legend=1 00:21:51.386 --rc geninfo_all_blocks=1 00:21:51.386 --rc geninfo_unexecuted_blocks=1 00:21:51.386 00:21:51.386 ' 00:21:51.386 10:22:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:51.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.386 --rc genhtml_branch_coverage=1 00:21:51.386 --rc genhtml_function_coverage=1 00:21:51.386 --rc genhtml_legend=1 00:21:51.386 --rc geninfo_all_blocks=1 00:21:51.386 --rc geninfo_unexecuted_blocks=1 00:21:51.386 00:21:51.386 ' 00:21:51.386 10:22:10 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.386 10:22:10 -- nvmf/common.sh@7 -- # uname -s 00:21:51.386 10:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.386 10:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.386 10:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.386 10:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.386 10:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.386 10:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.386 10:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.386 10:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.386 10:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.386 10:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.386 10:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:21:51.386 
10:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:21:51.386 10:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.386 10:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.386 10:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.386 10:22:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.386 10:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.386 10:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.386 10:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.386 10:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.386 10:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.386 10:22:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.386 10:22:10 -- paths/export.sh@5 -- # export PATH 00:21:51.386 10:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.386 10:22:10 -- nvmf/common.sh@46 -- # : 0 00:21:51.386 10:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:51.386 10:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:51.386 10:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:51.386 10:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.386 10:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.386 10:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:21:51.386 10:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:51.386 10:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:51.386 10:22:10 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.386 10:22:10 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.386 10:22:10 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.386 10:22:10 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.386 10:22:10 -- host/failover.sh@18 -- # nvmftestinit 00:21:51.386 10:22:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:51.386 10:22:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.386 10:22:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:51.387 10:22:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:51.387 10:22:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:51.387 10:22:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.387 10:22:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.387 10:22:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.387 10:22:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:51.387 10:22:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:51.387 10:22:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:51.387 10:22:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:51.387 10:22:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:51.387 10:22:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:51.387 10:22:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.387 10:22:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.387 10:22:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:51.387 10:22:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:51.387 10:22:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:51.387 10:22:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:51.387 10:22:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:51.387 10:22:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.387 10:22:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:51.387 10:22:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:51.387 10:22:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:51.387 10:22:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:51.387 10:22:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:51.387 10:22:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:51.387 Cannot find device "nvmf_tgt_br" 00:21:51.387 10:22:10 -- nvmf/common.sh@154 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.387 Cannot find device "nvmf_tgt_br2" 00:21:51.387 10:22:10 -- nvmf/common.sh@155 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:51.387 10:22:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:51.387 Cannot find device "nvmf_tgt_br" 00:21:51.387 10:22:10 -- nvmf/common.sh@157 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:51.387 Cannot find device "nvmf_tgt_br2" 00:21:51.387 10:22:10 -- nvmf/common.sh@158 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:51.387 10:22:10 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:21:51.387 10:22:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.387 10:22:10 -- nvmf/common.sh@161 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.387 10:22:10 -- nvmf/common.sh@162 -- # true 00:21:51.387 10:22:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:51.387 10:22:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:51.387 10:22:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:51.387 10:22:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:51.387 10:22:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:51.645 10:22:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:51.645 10:22:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:51.645 10:22:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:51.645 10:22:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:51.645 10:22:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:51.645 10:22:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:51.645 10:22:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:51.645 10:22:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:51.645 10:22:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:51.645 10:22:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:51.645 10:22:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:51.645 10:22:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:51.645 10:22:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:51.645 10:22:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:51.645 10:22:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:51.645 10:22:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:51.645 10:22:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:51.645 10:22:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:51.645 10:22:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:51.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:21:51.645 00:21:51.645 --- 10.0.0.2 ping statistics --- 00:21:51.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.645 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:21:51.645 10:22:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:51.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:51.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:51.645 00:21:51.645 --- 10.0.0.3 ping statistics --- 00:21:51.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.645 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:51.645 10:22:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:51.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:51.645 00:21:51.645 --- 10.0.0.1 ping statistics --- 00:21:51.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.645 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:51.645 10:22:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.645 10:22:11 -- nvmf/common.sh@421 -- # return 0 00:21:51.645 10:22:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:51.645 10:22:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.645 10:22:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:51.645 10:22:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:51.645 10:22:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.645 10:22:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:51.645 10:22:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:51.645 10:22:11 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:51.645 10:22:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:51.645 10:22:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:51.645 10:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.645 10:22:11 -- nvmf/common.sh@469 -- # nvmfpid=94910 00:21:51.645 10:22:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:51.645 10:22:11 -- nvmf/common.sh@470 -- # waitforlisten 94910 00:21:51.645 10:22:11 -- common/autotest_common.sh@829 -- # '[' -z 94910 ']' 00:21:51.645 10:22:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.645 10:22:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.645 10:22:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.645 10:22:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.645 10:22:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.904 [2024-11-19 10:22:11.195141] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:51.904 [2024-11-19 10:22:11.195260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.904 [2024-11-19 10:22:11.336639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.904 [2024-11-19 10:22:11.377493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:51.904 [2024-11-19 10:22:11.377666] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.904 [2024-11-19 10:22:11.377684] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:51.904 [2024-11-19 10:22:11.377695] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.904 [2024-11-19 10:22:11.377798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.904 [2024-11-19 10:22:11.378230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.904 [2024-11-19 10:22:11.378243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.839 10:22:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.839 10:22:12 -- common/autotest_common.sh@862 -- # return 0 00:21:52.839 10:22:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:52.839 10:22:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.839 10:22:12 -- common/autotest_common.sh@10 -- # set +x 00:21:52.839 10:22:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.839 10:22:12 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:53.097 [2024-11-19 10:22:12.496588] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.097 10:22:12 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:53.356 Malloc0 00:21:53.356 10:22:12 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.614 10:22:13 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.872 10:22:13 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:54.130 [2024-11-19 10:22:13.510399] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.130 10:22:13 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.426 [2024-11-19 10:22:13.810685] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:54.426 10:22:13 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:54.715 [2024-11-19 10:22:14.123011] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:54.715 10:22:14 -- host/failover.sh@31 -- # bdevperf_pid=95029 00:21:54.715 10:22:14 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:54.715 10:22:14 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:54.715 10:22:14 -- host/failover.sh@34 -- # waitforlisten 95029 /var/tmp/bdevperf.sock 00:21:54.715 10:22:14 -- common/autotest_common.sh@829 -- # '[' -z 95029 ']' 00:21:54.715 10:22:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.715 10:22:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.715 10:22:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:54.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.715 10:22:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.715 10:22:14 -- common/autotest_common.sh@10 -- # set +x 00:21:56.090 10:22:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.090 10:22:15 -- common/autotest_common.sh@862 -- # return 0 00:21:56.090 10:22:15 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.090 NVMe0n1 00:21:56.090 10:22:15 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.657 00:21:56.657 10:22:15 -- host/failover.sh@39 -- # run_test_pid=95077 00:21:56.657 10:22:15 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.657 10:22:15 -- host/failover.sh@41 -- # sleep 1 00:21:57.592 10:22:16 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:57.851 [2024-11-19 10:22:17.227215] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227294] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227310] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227319] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227335] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227343] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227359] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227375] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227392] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227400] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227408] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227416] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.851 [2024-11-19 10:22:17.227473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227481] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227489] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227498] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227530] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227547] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227556] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227564] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227572] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227614] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227630] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227638] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227646] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227654] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227679] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 [2024-11-19 10:22:17.227687] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc2ab0 is same with the state(5) to be set 00:21:57.852 10:22:17 -- host/failover.sh@45 -- # sleep 3 00:22:01.139 10:22:20 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.139 00:22:01.139 10:22:20 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:01.397 [2024-11-19 10:22:20.827011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc3920 is same with the state(5) to be set 00:22:01.397 [2024-11-19 10:22:20.827061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc3920 is same with the state(5) to be set 00:22:01.397 [2024-11-19 
10:22:20.827072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc3920 is same with the state(5) to be set
00:22:01.398 [2024-11-19 10:22:20.827529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc3920 is same with the state(5) to be set
00:22:01.398 10:22:20 -- host/failover.sh@50 -- # sleep 3
00:22:04.680 10:22:23 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:04.680 [2024-11-19 10:22:24.140476] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:04.680 10:22:24 -- host/failover.sh@55 -- # sleep 1
00:22:06.056 10:22:25 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:06.056 [2024-11-19 10:22:25.413423] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5040 is same with the state(5) to be set
00:22:06.057 [2024-11-19 10:22:25.414468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5040 is same with the state(5) to be set
00:22:06.057 10:22:25 -- host/failover.sh@59 -- # wait 95077
00:22:12.687 0
00:22:12.687 10:22:31 -- host/failover.sh@61 -- # killprocess 95029
00:22:12.687 10:22:31 -- common/autotest_common.sh@936 -- # '[' -z 95029 ']'
00:22:12.687 10:22:31 -- common/autotest_common.sh@940 -- # kill -0 95029
00:22:12.687 10:22:31 -- common/autotest_common.sh@941 -- # uname
00:22:12.687 10:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:12.687 10:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95029
00:22:12.687 10:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0
killing process with pid 95029
10:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
10:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95029'
10:22:31 -- common/autotest_common.sh@955 -- # kill 95029
10:22:31 -- common/autotest_common.sh@960 -- # wait 95029
10:22:31 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-11-19 10:22:14.192629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
[2024-11-19 10:22:14.192736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95029 ]
[2024-11-19 10:22:14.321763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 10:22:14.357116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
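The failover step traced above is driven entirely through rpc.py against the already-running target. As a rough standalone sketch, assuming the same checkout path as this log and a target that already exposes nqn.2016-06.io.spdk:cnode1 (addresses, ports and subcommand flags are copied from the log lines above; everything else is illustrative and not part of the original failover.sh), the sequence is:

```bash
#!/usr/bin/env bash
# Sketch of the listener add/remove sequence traced above.
# Assumptions: same repo path as the log, target already running and exposing
# nqn.2016-06.io.spdk:cnode1; addresses and ports copied from the log.
set -e

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

sleep 3                                                   # failover.sh@50
# Re-add the listener on the primary address (failover.sh@53)
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1                                                   # failover.sh@55
# Remove the listener on the secondary port (failover.sh@57)
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
```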
00:22:12.687 [2024-11-19 10:22:17.227891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.227937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.227965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.227982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.227999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:111392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.687 [2024-11-19 10:22:17.228202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.687 [2024-11-19 10:22:17.228218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:111400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 
10:22:17.228249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:111424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:111456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:111576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:111600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:112104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.228861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.228891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.228952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.228982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.228998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.688 [2024-11-19 10:22:17.229348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.688 [2024-11-19 10:22:17.229437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.688 [2024-11-19 10:22:17.229453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.229503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:111728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.229783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 
[2024-11-19 10:22:17.229857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.229961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.229978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.229992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.689 [2024-11-19 10:22:17.230421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:111792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.689 [2024-11-19 10:22:17.230685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.689 [2024-11-19 10:22:17.230699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.230831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.230925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:111928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.230971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.230997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:111952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.690 [2024-11-19 10:22:17.231715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 
[2024-11-19 10:22:17.231760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.690 [2024-11-19 10:22:17.231955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.690 [2024-11-19 10:22:17.231970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87ed40 is same with the state(5) to be set 00:22:12.691 [2024-11-19 10:22:17.231987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.691 [2024-11-19 10:22:17.231998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.691 [2024-11-19 10:22:17.232009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112144 len:8 PRP1 0x0 PRP2 0x0 00:22:12.691 [2024-11-19 10:22:17.232022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:17.232090] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x87ed40 was disconnected and freed. reset controller. 
00:22:12.691 [2024-11-19 10:22:17.232117] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:12.691 [2024-11-19 10:22:17.232182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.691 [2024-11-19 10:22:17.232205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:17.232221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.691 [2024-11-19 10:22:17.232235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:17.232249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.691 [2024-11-19 10:22:17.232263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:17.232277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.691 [2024-11-19 10:22:17.232291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:17.232304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.691 [2024-11-19 10:22:17.232351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84c940 (9): Bad file descriptor 00:22:12.691 [2024-11-19 10:22:17.234949] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.691 [2024-11-19 10:22:17.271093] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:12.691 [2024-11-19 10:22:20.827630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.827983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.827999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828029] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.691 [2024-11-19 10:22:20.828665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.691 [2024-11-19 10:22:20.828679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.828961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.828986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85192 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:12.692 [2024-11-19 10:22:20.829324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.692 [2024-11-19 10:22:20.829612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829642] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.692 [2024-11-19 10:22:20.829718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.692 [2024-11-19 10:22:20.829732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.829983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.829997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 
[2024-11-19 10:22:20.830927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.693 [2024-11-19 10:22:20.830941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.830965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.693 [2024-11-19 10:22:20.830989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.693 [2024-11-19 10:22:20.831007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.694 [2024-11-19 10:22:20.831500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:69 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:20.831742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x858d90 is same with the state(5) to be set 00:22:12.694 [2024-11-19 10:22:20.831795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.694 [2024-11-19 10:22:20.831808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.694 [2024-11-19 10:22:20.831830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85760 len:8 PRP1 0x0 PRP2 0x0 00:22:12.694 [2024-11-19 10:22:20.831847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.831892] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x858d90 was disconnected and freed. reset controller. 
00:22:12.694 [2024-11-19 10:22:20.831911] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:12.694 [2024-11-19 10:22:20.831967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.694 [2024-11-19 10:22:20.831990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.832006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.694 [2024-11-19 10:22:20.832020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.832035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.694 [2024-11-19 10:22:20.832050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.832064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.694 [2024-11-19 10:22:20.832078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:20.832092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.694 [2024-11-19 10:22:20.834688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.694 [2024-11-19 10:22:20.834728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84c940 (9): Bad file descriptor 00:22:12.694 [2024-11-19 10:22:20.865076] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:12.694 [2024-11-19 10:22:25.414583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.694 [2024-11-19 10:22:25.414901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.694 [2024-11-19 10:22:25.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.414931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.414947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.414961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.414977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415020] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:103 nsid:1 lba:22240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22912 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.415979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.415995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.416009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.416033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.416049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.416065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.695 [2024-11-19 10:22:25.416079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.416095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.695 [2024-11-19 10:22:25.416110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.416125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.416140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.695 [2024-11-19 10:22:25.416156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.695 [2024-11-19 10:22:25.416170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:12.696 [2024-11-19 10:22:25.416291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.416829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.416982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.416998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.696 [2024-11-19 10:22:25.417160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.417190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.696 [2024-11-19 10:22:25.417229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.696 [2024-11-19 10:22:25.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:12.697 [2024-11-19 10:22:25.417546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.417856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417872] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.417964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.417988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418193] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.697 [2024-11-19 10:22:25.418358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.697 [2024-11-19 10:22:25.418471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.697 [2024-11-19 10:22:25.418485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22832 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.698 [2024-11-19 10:22:25.418680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8470d0 is same with the state(5) to be set 00:22:12.698 [2024-11-19 10:22:25.418712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.698 [2024-11-19 10:22:25.418724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.698 [2024-11-19 10:22:25.418734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22896 len:8 PRP1 0x0 PRP2 0x0 00:22:12.698 [2024-11-19 10:22:25.418748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418796] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8470d0 was disconnected and freed. reset controller. 
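Note: the long run of *NOTICE* lines above is the expected side effect of tearing down a TCP qpair during failover rather than a device-level I/O error; every READ/WRITE still queued on qpair 0x8470d0 is completed with ABORTED - SQ DELETION, the queued requests are aborted, and the controller is then reset. A rough way to tally how many commands were flushed this way from a saved copy of this console output (the file name below is only illustrative):
    # count occurrences, not lines, since several entries can share a wrapped line
    grep -o 'ABORTED - SQ DELETION' nvmf_failover_console.log | wc -l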
00:22:12.698 [2024-11-19 10:22:25.418815] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:12.698 [2024-11-19 10:22:25.418887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.698 [2024-11-19 10:22:25.418922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.698 [2024-11-19 10:22:25.418953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.418967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.698 [2024-11-19 10:22:25.418992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.419010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:12.698 [2024-11-19 10:22:25.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.698 [2024-11-19 10:22:25.419038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:12.698 [2024-11-19 10:22:25.419094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84c940 (9): Bad file descriptor 00:22:12.698 [2024-11-19 10:22:25.421879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:12.698 [2024-11-19 10:22:25.453816] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:12.698 00:22:12.698 Latency(us) 00:22:12.698 [2024-11-19T10:22:32.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.698 [2024-11-19T10:22:32.244Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:12.698 Verification LBA range: start 0x0 length 0x4000 00:22:12.698 NVMe0n1 : 15.01 12355.83 48.26 290.31 0.00 10101.96 659.08 17277.67 00:22:12.698 [2024-11-19T10:22:32.244Z] =================================================================================================================== 00:22:12.698 [2024-11-19T10:22:32.244Z] Total : 12355.83 48.26 290.31 0.00 10101.96 659.08 17277.67 00:22:12.698 Received shutdown signal, test time was about 15.000000 seconds 00:22:12.698 00:22:12.698 Latency(us) 00:22:12.698 [2024-11-19T10:22:32.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.698 [2024-11-19T10:22:32.244Z] =================================================================================================================== 00:22:12.698 [2024-11-19T10:22:32.244Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:12.698 10:22:31 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:12.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
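Note: the throughput column in the 15-second summary above is consistent with the IOPS column once the 4096-byte I/O size used by bdevperf is taken into account; a quick sanity check of that conversion (assumes bc is available on the build host):
    # 12355.83 IOPS x 4096 bytes per I/O, expressed in MiB/s; prints ~48.26
    echo 'scale=2; 12355.83 * 4096 / (1024 * 1024)' | bc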
00:22:12.698 10:22:31 -- host/failover.sh@65 -- # count=3 00:22:12.698 10:22:31 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:12.698 10:22:31 -- host/failover.sh@73 -- # bdevperf_pid=95280 00:22:12.698 10:22:31 -- host/failover.sh@75 -- # waitforlisten 95280 /var/tmp/bdevperf.sock 00:22:12.698 10:22:31 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:12.698 10:22:31 -- common/autotest_common.sh@829 -- # '[' -z 95280 ']' 00:22:12.698 10:22:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.698 10:22:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.698 10:22:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.698 10:22:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.698 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:22:12.956 10:22:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.956 10:22:32 -- common/autotest_common.sh@862 -- # return 0 00:22:12.956 10:22:32 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:13.213 [2024-11-19 10:22:32.643182] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.213 10:22:32 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:13.472 [2024-11-19 10:22:32.931459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:13.472 10:22:32 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.038 NVMe0n1 00:22:14.038 10:22:33 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.296 00:22:14.296 10:22:33 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.556 00:22:14.556 10:22:33 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:14.556 10:22:33 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.813 10:22:34 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:15.072 10:22:34 -- host/failover.sh@87 -- # sleep 3 00:22:18.355 10:22:37 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.355 10:22:37 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:18.355 10:22:37 -- host/failover.sh@90 -- # run_test_pid=95428 00:22:18.355 10:22:37 -- host/failover.sh@92 -- # wait 95428 00:22:18.355 10:22:37 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.734 0 00:22:19.734 10:22:39 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:19.734 [2024-11-19 10:22:31.394189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:19.734 [2024-11-19 10:22:31.394313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95280 ] 00:22:19.734 [2024-11-19 10:22:31.526075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.734 [2024-11-19 10:22:31.561395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.734 [2024-11-19 10:22:34.481230] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:19.734 [2024-11-19 10:22:34.481372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.734 [2024-11-19 10:22:34.481400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.734 [2024-11-19 10:22:34.481419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.734 [2024-11-19 10:22:34.481433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.734 [2024-11-19 10:22:34.481448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.734 [2024-11-19 10:22:34.481463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.734 [2024-11-19 10:22:34.481478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:19.734 [2024-11-19 10:22:34.481492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:19.734 [2024-11-19 10:22:34.481507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:19.734 [2024-11-19 10:22:34.481562] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:19.734 [2024-11-19 10:22:34.481598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6dd940 (9): Bad file descriptor 00:22:19.734 [2024-11-19 10:22:34.490919] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:19.734 Running I/O for 1 seconds... 
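Note: the bdevperf instance whose log is being dumped above was started in RPC-driven mode (the -z flag), so it sits idle on /var/tmp/bdevperf.sock until perform_tests is sent to it; condensed from the trace earlier in this log, the pair of commands that drives this 1-second verify run is roughly:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests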
00:22:19.734 00:22:19.734 Latency(us) 00:22:19.734 [2024-11-19T10:22:39.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.734 [2024-11-19T10:22:39.280Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:19.734 Verification LBA range: start 0x0 length 0x4000 00:22:19.734 NVMe0n1 : 1.01 12725.28 49.71 0.00 0.00 10006.11 1787.35 15847.80 00:22:19.734 [2024-11-19T10:22:39.280Z] =================================================================================================================== 00:22:19.734 [2024-11-19T10:22:39.280Z] Total : 12725.28 49.71 0.00 0.00 10006.11 1787.35 15847.80 00:22:19.734 10:22:39 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.734 10:22:39 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:19.992 10:22:39 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.251 10:22:39 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:20.251 10:22:39 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:20.509 10:22:39 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.767 10:22:40 -- host/failover.sh@101 -- # sleep 3 00:22:24.053 10:22:43 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:24.053 10:22:43 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:24.053 10:22:43 -- host/failover.sh@108 -- # killprocess 95280 00:22:24.053 10:22:43 -- common/autotest_common.sh@936 -- # '[' -z 95280 ']' 00:22:24.053 10:22:43 -- common/autotest_common.sh@940 -- # kill -0 95280 00:22:24.053 10:22:43 -- common/autotest_common.sh@941 -- # uname 00:22:24.053 10:22:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.053 10:22:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95280 00:22:24.053 killing process with pid 95280 00:22:24.053 10:22:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:24.053 10:22:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:24.053 10:22:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95280' 00:22:24.053 10:22:43 -- common/autotest_common.sh@955 -- # kill 95280 00:22:24.053 10:22:43 -- common/autotest_common.sh@960 -- # wait 95280 00:22:24.311 10:22:43 -- host/failover.sh@110 -- # sync 00:22:24.311 10:22:43 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.577 10:22:43 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:24.577 10:22:43 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:24.577 10:22:43 -- host/failover.sh@116 -- # nvmftestfini 00:22:24.577 10:22:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:24.577 10:22:43 -- nvmf/common.sh@116 -- # sync 00:22:24.577 10:22:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:24.577 10:22:43 -- nvmf/common.sh@119 -- # set +e 00:22:24.577 10:22:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:24.577 10:22:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:24.577 rmmod 
nvme_tcp 00:22:24.577 rmmod nvme_fabrics 00:22:24.577 rmmod nvme_keyring 00:22:24.577 10:22:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:24.577 10:22:44 -- nvmf/common.sh@123 -- # set -e 00:22:24.577 10:22:44 -- nvmf/common.sh@124 -- # return 0 00:22:24.577 10:22:44 -- nvmf/common.sh@477 -- # '[' -n 94910 ']' 00:22:24.577 10:22:44 -- nvmf/common.sh@478 -- # killprocess 94910 00:22:24.577 10:22:44 -- common/autotest_common.sh@936 -- # '[' -z 94910 ']' 00:22:24.577 10:22:44 -- common/autotest_common.sh@940 -- # kill -0 94910 00:22:24.577 10:22:44 -- common/autotest_common.sh@941 -- # uname 00:22:24.577 10:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.577 10:22:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94910 00:22:24.577 10:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:24.577 killing process with pid 94910 00:22:24.577 10:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:24.577 10:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94910' 00:22:24.577 10:22:44 -- common/autotest_common.sh@955 -- # kill 94910 00:22:24.577 10:22:44 -- common/autotest_common.sh@960 -- # wait 94910 00:22:24.848 10:22:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:24.848 10:22:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:24.848 10:22:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:24.848 10:22:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.848 10:22:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:24.848 10:22:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.848 10:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.848 10:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.849 10:22:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:24.849 00:22:24.849 real 0m33.709s 00:22:24.849 user 2m11.762s 00:22:24.849 sys 0m4.658s 00:22:24.849 10:22:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:24.849 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:22:24.849 ************************************ 00:22:24.849 END TEST nvmf_failover 00:22:24.849 ************************************ 00:22:24.849 10:22:44 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:24.849 10:22:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:24.849 10:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:24.849 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:22:24.849 ************************************ 00:22:24.849 START TEST nvmf_discovery 00:22:24.849 ************************************ 00:22:24.849 10:22:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:24.849 * Looking for test storage... 
00:22:24.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:24.849 10:22:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:24.849 10:22:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:24.849 10:22:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:25.107 10:22:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:25.107 10:22:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:25.107 10:22:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:25.107 10:22:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:25.107 10:22:44 -- scripts/common.sh@335 -- # IFS=.-: 00:22:25.107 10:22:44 -- scripts/common.sh@335 -- # read -ra ver1 00:22:25.107 10:22:44 -- scripts/common.sh@336 -- # IFS=.-: 00:22:25.107 10:22:44 -- scripts/common.sh@336 -- # read -ra ver2 00:22:25.107 10:22:44 -- scripts/common.sh@337 -- # local 'op=<' 00:22:25.107 10:22:44 -- scripts/common.sh@339 -- # ver1_l=2 00:22:25.107 10:22:44 -- scripts/common.sh@340 -- # ver2_l=1 00:22:25.107 10:22:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:25.107 10:22:44 -- scripts/common.sh@343 -- # case "$op" in 00:22:25.107 10:22:44 -- scripts/common.sh@344 -- # : 1 00:22:25.107 10:22:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:25.107 10:22:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:25.107 10:22:44 -- scripts/common.sh@364 -- # decimal 1 00:22:25.107 10:22:44 -- scripts/common.sh@352 -- # local d=1 00:22:25.107 10:22:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:25.107 10:22:44 -- scripts/common.sh@354 -- # echo 1 00:22:25.108 10:22:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:25.108 10:22:44 -- scripts/common.sh@365 -- # decimal 2 00:22:25.108 10:22:44 -- scripts/common.sh@352 -- # local d=2 00:22:25.108 10:22:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:25.108 10:22:44 -- scripts/common.sh@354 -- # echo 2 00:22:25.108 10:22:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:25.108 10:22:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:25.108 10:22:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:25.108 10:22:44 -- scripts/common.sh@367 -- # return 0 00:22:25.108 10:22:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:25.108 10:22:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:25.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.108 --rc genhtml_branch_coverage=1 00:22:25.108 --rc genhtml_function_coverage=1 00:22:25.108 --rc genhtml_legend=1 00:22:25.108 --rc geninfo_all_blocks=1 00:22:25.108 --rc geninfo_unexecuted_blocks=1 00:22:25.108 00:22:25.108 ' 00:22:25.108 10:22:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:25.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.108 --rc genhtml_branch_coverage=1 00:22:25.108 --rc genhtml_function_coverage=1 00:22:25.108 --rc genhtml_legend=1 00:22:25.108 --rc geninfo_all_blocks=1 00:22:25.108 --rc geninfo_unexecuted_blocks=1 00:22:25.108 00:22:25.108 ' 00:22:25.108 10:22:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:25.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.108 --rc genhtml_branch_coverage=1 00:22:25.108 --rc genhtml_function_coverage=1 00:22:25.108 --rc genhtml_legend=1 00:22:25.108 --rc geninfo_all_blocks=1 00:22:25.108 --rc geninfo_unexecuted_blocks=1 00:22:25.108 00:22:25.108 ' 00:22:25.108 
10:22:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:25.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:25.108 --rc genhtml_branch_coverage=1 00:22:25.108 --rc genhtml_function_coverage=1 00:22:25.108 --rc genhtml_legend=1 00:22:25.108 --rc geninfo_all_blocks=1 00:22:25.108 --rc geninfo_unexecuted_blocks=1 00:22:25.108 00:22:25.108 ' 00:22:25.108 10:22:44 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:25.108 10:22:44 -- nvmf/common.sh@7 -- # uname -s 00:22:25.108 10:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.108 10:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.108 10:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.108 10:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.108 10:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.108 10:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.108 10:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.108 10:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.108 10:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.108 10:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:22:25.108 10:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:22:25.108 10:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.108 10:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.108 10:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:25.108 10:22:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.108 10:22:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.108 10:22:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.108 10:22:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.108 10:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.108 10:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.108 10:22:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.108 10:22:44 -- paths/export.sh@5 -- # export PATH 00:22:25.108 10:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.108 10:22:44 -- nvmf/common.sh@46 -- # : 0 00:22:25.108 10:22:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:25.108 10:22:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:25.108 10:22:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:25.108 10:22:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.108 10:22:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.108 10:22:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:25.108 10:22:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:25.108 10:22:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:25.108 10:22:44 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:25.108 10:22:44 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:25.108 10:22:44 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:25.108 10:22:44 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:25.108 10:22:44 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:25.108 10:22:44 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:25.108 10:22:44 -- host/discovery.sh@25 -- # nvmftestinit 00:22:25.108 10:22:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:25.108 10:22:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.108 10:22:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:25.108 10:22:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:25.108 10:22:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:25.108 10:22:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.108 10:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.108 10:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.108 10:22:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:25.108 10:22:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:25.108 10:22:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.108 10:22:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.108 10:22:44 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:25.108 10:22:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:25.108 10:22:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:25.108 10:22:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:25.108 10:22:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:25.108 10:22:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.108 10:22:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:25.108 10:22:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:25.108 10:22:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:25.108 10:22:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:25.108 10:22:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:25.108 10:22:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:25.108 Cannot find device "nvmf_tgt_br" 00:22:25.108 10:22:44 -- nvmf/common.sh@154 -- # true 00:22:25.108 10:22:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.108 Cannot find device "nvmf_tgt_br2" 00:22:25.108 10:22:44 -- nvmf/common.sh@155 -- # true 00:22:25.108 10:22:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:25.108 10:22:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:25.108 Cannot find device "nvmf_tgt_br" 00:22:25.108 10:22:44 -- nvmf/common.sh@157 -- # true 00:22:25.108 10:22:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:25.108 Cannot find device "nvmf_tgt_br2" 00:22:25.108 10:22:44 -- nvmf/common.sh@158 -- # true 00:22:25.108 10:22:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:25.108 10:22:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:25.367 10:22:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.367 10:22:44 -- nvmf/common.sh@161 -- # true 00:22:25.367 10:22:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.367 10:22:44 -- nvmf/common.sh@162 -- # true 00:22:25.367 10:22:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:25.367 10:22:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:25.367 10:22:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:25.367 10:22:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:25.367 10:22:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:25.367 10:22:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:25.367 10:22:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:25.367 10:22:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:25.367 10:22:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:25.367 10:22:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:25.367 10:22:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:25.367 10:22:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:25.367 10:22:44 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:25.367 10:22:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:25.367 10:22:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:25.367 10:22:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:25.367 10:22:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:25.367 10:22:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:25.367 10:22:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:25.367 10:22:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:25.367 10:22:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:25.367 10:22:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:25.367 10:22:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:25.367 10:22:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:25.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:22:25.367 00:22:25.367 --- 10.0.0.2 ping statistics --- 00:22:25.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.367 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:22:25.367 10:22:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:25.367 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:25.367 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:25.367 00:22:25.367 --- 10.0.0.3 ping statistics --- 00:22:25.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.367 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:25.367 10:22:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:25.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:25.367 00:22:25.367 --- 10.0.0.1 ping statistics --- 00:22:25.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.367 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:25.367 10:22:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.367 10:22:44 -- nvmf/common.sh@421 -- # return 0 00:22:25.367 10:22:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:25.367 10:22:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.367 10:22:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:25.367 10:22:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:25.367 10:22:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.367 10:22:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:25.367 10:22:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:25.367 10:22:44 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:25.367 10:22:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:25.367 10:22:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:25.367 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:22:25.367 10:22:44 -- nvmf/common.sh@469 -- # nvmfpid=95734 00:22:25.367 10:22:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:25.367 10:22:44 -- nvmf/common.sh@470 -- # waitforlisten 95734 00:22:25.367 10:22:44 -- common/autotest_common.sh@829 -- # '[' -z 95734 ']' 00:22:25.367 10:22:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.367 10:22:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.367 10:22:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.367 10:22:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.367 10:22:44 -- common/autotest_common.sh@10 -- # set +x 00:22:25.625 [2024-11-19 10:22:44.930786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:25.625 [2024-11-19 10:22:44.930895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.625 [2024-11-19 10:22:45.068162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.625 [2024-11-19 10:22:45.106812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:25.625 [2024-11-19 10:22:45.107015] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.625 [2024-11-19 10:22:45.107038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.625 [2024-11-19 10:22:45.107055] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
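The nvmf_veth_init sequence above has built the isolated topology the rest of the run relies on: a network namespace nvmf_tgt_ns_spdk holding the two target-side interfaces (10.0.0.2 on nvmf_tgt_if, 10.0.0.3 on nvmf_tgt_if2), an initiator-side interface nvmf_init_if at 10.0.0.1, and a bridge nvmf_br joining the veth peer ends, with the three single-packet pings confirming reachability in both directions before nvme-tcp is loaded and the target starts. Condensed into a standalone sequence (the commands are the ones in the trace, only grouped for readability):

    # Build the namespace/veth/bridge topology used by nvmf_veth_init.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target -> initiator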
00:22:25.625 [2024-11-19 10:22:45.107102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.883 10:22:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.883 10:22:45 -- common/autotest_common.sh@862 -- # return 0 00:22:25.883 10:22:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:25.883 10:22:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.883 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 10:22:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.883 10:22:45 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:25.883 10:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.883 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 [2024-11-19 10:22:45.237783] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:25.883 10:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.883 10:22:45 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:25.883 10:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.883 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 [2024-11-19 10:22:45.246459] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:25.883 10:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.883 10:22:45 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:25.883 10:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.883 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 null0 00:22:25.883 10:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.883 10:22:45 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:25.883 10:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.883 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 null1 00:22:25.883 10:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.883 10:22:45 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:25.884 10:22:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.884 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.884 10:22:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.884 10:22:45 -- host/discovery.sh@45 -- # hostpid=95769 00:22:25.884 10:22:45 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:25.884 10:22:45 -- host/discovery.sh@46 -- # waitforlisten 95769 /tmp/host.sock 00:22:25.884 10:22:45 -- common/autotest_common.sh@829 -- # '[' -z 95769 ']' 00:22:25.884 10:22:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:25.884 10:22:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:25.884 10:22:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:25.884 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:25.884 10:22:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:25.884 10:22:45 -- common/autotest_common.sh@10 -- # set +x 00:22:25.884 [2024-11-19 10:22:45.331482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
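Two SPDK applications are in play from here on. The target nvmf_tgt (pid 95734) runs inside nvmf_tgt_ns_spdk on core mask 0x2 and answers on the default RPC socket; a second nvmf_tgt (pid 95769, core mask 0x1) is being started on the initiator side with -r /tmp/host.sock and acts as the NVMe-oF host for the discovery test. The target-side provisioning just traced reduces to a few RPC calls; rpc_cmd is the test wrapper around scripts/rpc.py and the flags are exactly the ones passed by the harness, so run by hand the sequence would look roughly like:

    # Target application inside the test namespace (default RPC socket).
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # Target-side setup: TCP transport, discovery listener on 8009, two null bdevs.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    ./scripts/rpc.py bdev_null_create null0 1000 512
    ./scripts/rpc.py bdev_null_create null1 1000 512
    ./scripts/rpc.py bdev_wait_for_examine

    # Host-side application with its own RPC socket.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &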
00:22:25.884 [2024-11-19 10:22:45.331579] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95769 ] 00:22:26.142 [2024-11-19 10:22:45.471355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.142 [2024-11-19 10:22:45.510512] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:26.142 [2024-11-19 10:22:45.510702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.077 10:22:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.077 10:22:46 -- common/autotest_common.sh@862 -- # return 0 00:22:27.077 10:22:46 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.077 10:22:46 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@72 -- # notify_id=0 00:22:27.077 10:22:46 -- host/discovery.sh@78 -- # get_subsystem_names 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # sort 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # xargs 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:22:27.077 10:22:46 -- host/discovery.sh@79 -- # get_bdev_list 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # xargs 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # sort 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:22:27.077 10:22:46 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@82 -- # get_subsystem_names 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.077 10:22:46 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # sort 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- host/discovery.sh@59 -- # xargs 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.077 10:22:46 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:22:27.077 10:22:46 -- host/discovery.sh@83 -- # get_bdev_list 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.077 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.077 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # xargs 00:22:27.077 10:22:46 -- host/discovery.sh@55 -- # sort 00:22:27.077 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:27.336 10:22:46 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@86 -- # get_subsystem_names 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # sort 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # xargs 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:22:27.336 10:22:46 -- host/discovery.sh@87 -- # get_bdev_list 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # sort 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # xargs 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:27.336 10:22:46 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 [2024-11-19 10:22:46.758779] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@92 -- # get_subsystem_names 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # sort 00:22:27.336 10:22:46 -- host/discovery.sh@59 -- # xargs 
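At this point the host runs a discovery service named nvme polling 10.0.0.2:8009 with host NQN nqn.2021-12.io.spdk:test, while the target has created nqn.2016-06.io.spdk:cnode0, attached the null0 namespace and opened a data listener on port 4420. The repeated empty results from get_subsystem_names and get_bdev_list are the point of these checks: the host NQN has not yet been added to the subsystem, so nothing is exposed to this host through discovery. The helpers themselves are thin wrappers over host-side RPCs; an approximate standalone form (HOST_SOCK as defined earlier in the script, with jq/sort/xargs flattening the JSON exactly as in the trace) is:

    HOST_SOCK=/tmp/host.sock

    get_subsystem_names() {   # attached controller names, e.g. "nvme0"
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2"
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {   # trsvcid of each path of controller $1, e.g. "4420 4421"
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

Once nvmf_subsystem_add_host whitelists nqn.2021-12.io.spdk:test (the next step in the trace), the discovery poller attaches controller nvme0 and the bdev nvme0n1 appears.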
00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:27.336 10:22:46 -- host/discovery.sh@93 -- # get_bdev_list 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # sort 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 10:22:46 -- host/discovery.sh@55 -- # xargs 00:22:27.336 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.336 10:22:46 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:22:27.336 10:22:46 -- host/discovery.sh@94 -- # get_notification_count 00:22:27.336 10:22:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:27.336 10:22:46 -- host/discovery.sh@74 -- # jq '. | length' 00:22:27.336 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.336 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.595 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.595 10:22:46 -- host/discovery.sh@74 -- # notification_count=0 00:22:27.595 10:22:46 -- host/discovery.sh@75 -- # notify_id=0 00:22:27.595 10:22:46 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:22:27.595 10:22:46 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:27.595 10:22:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.595 10:22:46 -- common/autotest_common.sh@10 -- # set +x 00:22:27.595 10:22:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.595 10:22:46 -- host/discovery.sh@100 -- # sleep 1 00:22:28.163 [2024-11-19 10:22:47.411027] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:28.163 [2024-11-19 10:22:47.411072] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:28.163 [2024-11-19 10:22:47.411092] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:28.163 [2024-11-19 10:22:47.497148] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:28.163 [2024-11-19 10:22:47.552947] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:28.163 [2024-11-19 10:22:47.552985] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:28.421 10:22:47 -- host/discovery.sh@101 -- # get_subsystem_names 00:22:28.421 10:22:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:28.421 10:22:47 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:28.421 10:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.421 10:22:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 10:22:47 -- host/discovery.sh@59 -- # sort 00:22:28.421 10:22:47 -- host/discovery.sh@59 -- # xargs 00:22:28.421 10:22:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.679 10:22:47 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.680 10:22:47 -- host/discovery.sh@102 -- # get_bdev_list 00:22:28.680 10:22:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:22:28.680 10:22:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:28.680 10:22:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.680 10:22:47 -- host/discovery.sh@55 -- # sort 00:22:28.680 10:22:47 -- common/autotest_common.sh@10 -- # set +x 00:22:28.680 10:22:47 -- host/discovery.sh@55 -- # xargs 00:22:28.680 10:22:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:22:28.680 10:22:48 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:28.680 10:22:48 -- host/discovery.sh@63 -- # sort -n 00:22:28.680 10:22:48 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:28.680 10:22:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.680 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:22:28.680 10:22:48 -- host/discovery.sh@63 -- # xargs 00:22:28.680 10:22:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@104 -- # get_notification_count 00:22:28.680 10:22:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:28.680 10:22:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.680 10:22:48 -- host/discovery.sh@74 -- # jq '. | length' 00:22:28.680 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:22:28.680 10:22:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@74 -- # notification_count=1 00:22:28.680 10:22:48 -- host/discovery.sh@75 -- # notify_id=1 00:22:28.680 10:22:48 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:28.680 10:22:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.680 10:22:48 -- common/autotest_common.sh@10 -- # set +x 00:22:28.680 10:22:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.680 10:22:48 -- host/discovery.sh@109 -- # sleep 1 00:22:30.056 10:22:49 -- host/discovery.sh@110 -- # get_bdev_list 00:22:30.056 10:22:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.056 10:22:49 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:30.056 10:22:49 -- host/discovery.sh@55 -- # sort 00:22:30.056 10:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.056 10:22:49 -- common/autotest_common.sh@10 -- # set +x 00:22:30.056 10:22:49 -- host/discovery.sh@55 -- # xargs 00:22:30.056 10:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.056 10:22:49 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:30.056 10:22:49 -- host/discovery.sh@111 -- # get_notification_count 00:22:30.056 10:22:49 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:30.056 10:22:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:30.056 10:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.056 10:22:49 -- common/autotest_common.sh@10 -- # set +x 00:22:30.056 10:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.056 10:22:49 -- host/discovery.sh@74 -- # notification_count=1 00:22:30.056 10:22:49 -- host/discovery.sh@75 -- # notify_id=2 00:22:30.056 10:22:49 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:22:30.056 10:22:49 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:30.056 10:22:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.056 10:22:49 -- common/autotest_common.sh@10 -- # set +x 00:22:30.056 [2024-11-19 10:22:49.288408] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.056 [2024-11-19 10:22:49.289597] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:30.056 [2024-11-19 10:22:49.289646] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:30.056 10:22:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.056 10:22:49 -- host/discovery.sh@117 -- # sleep 1 00:22:30.056 [2024-11-19 10:22:49.375658] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:30.056 [2024-11-19 10:22:49.433012] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:30.056 [2024-11-19 10:22:49.433061] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:30.056 [2024-11-19 10:22:49.433069] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:30.992 10:22:50 -- host/discovery.sh@118 -- # get_subsystem_names 00:22:30.992 10:22:50 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:30.992 10:22:50 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:30.992 10:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.992 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:30.992 10:22:50 -- host/discovery.sh@59 -- # sort 00:22:30.992 10:22:50 -- host/discovery.sh@59 -- # xargs 00:22:30.992 10:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.992 10:22:50 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.992 10:22:50 -- host/discovery.sh@119 -- # get_bdev_list 00:22:30.992 10:22:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.992 10:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.992 10:22:50 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:30.992 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:30.992 10:22:50 -- host/discovery.sh@55 -- # sort 00:22:30.992 10:22:50 -- host/discovery.sh@55 -- # xargs 00:22:30.992 10:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.992 10:22:50 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:30.992 10:22:50 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:22:30.992 10:22:50 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:30.992 10:22:50 -- 
host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:30.992 10:22:50 -- host/discovery.sh@63 -- # sort -n 00:22:30.992 10:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.992 10:22:50 -- host/discovery.sh@63 -- # xargs 00:22:30.992 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:30.993 10:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.993 10:22:50 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:30.993 10:22:50 -- host/discovery.sh@121 -- # get_notification_count 00:22:30.993 10:22:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:30.993 10:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.993 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:30.993 10:22:50 -- host/discovery.sh@74 -- # jq '. | length' 00:22:30.993 10:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.993 10:22:50 -- host/discovery.sh@74 -- # notification_count=0 00:22:30.993 10:22:50 -- host/discovery.sh@75 -- # notify_id=2 00:22:30.993 10:22:50 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:22:30.993 10:22:50 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:30.993 10:22:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.993 10:22:50 -- common/autotest_common.sh@10 -- # set +x 00:22:30.993 [2024-11-19 10:22:50.533838] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:30.993 [2024-11-19 10:22:50.533880] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:31.255 10:22:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.255 10:22:50 -- host/discovery.sh@127 -- # sleep 1 00:22:31.255 [2024-11-19 10:22:50.540793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.255 [2024-11-19 10:22:50.540841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.255 [2024-11-19 10:22:50.540856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.255 [2024-11-19 10:22:50.540866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.255 [2024-11-19 10:22:50.540876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.255 [2024-11-19 10:22:50.540886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.255 [2024-11-19 10:22:50.540896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:31.255 [2024-11-19 10:22:50.540905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.255 [2024-11-19 10:22:50.540918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.255 [2024-11-19 10:22:50.550743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 
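The multipath part of the test is now complete: adding the null1 namespace produced a second notification and a second bdev (nvme0n2), and publishing a listener on port 4421 raised an AER on the discovery controller, after which the host re-read the discovery log page and attached 10.0.0.2:4421 as an additional path to the same controller, hence the "4420 4421" path check. The 4420 listener is then withdrawn, which is what produces the reconnect errors that follow. The calls behind this stage, as recorded in the trace, are:

    # Second namespace and second data listener: the host picks up 4421 as an extra path.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

    # Verify both paths from the host, then withdraw the original one.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420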
00:22:31.255 [2024-11-19 10:22:50.560768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.255 [2024-11-19 10:22:50.560916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.560976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.560995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.255 [2024-11-19 10:22:50.561012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.255 [2024-11-19 10:22:50.561042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.255 [2024-11-19 10:22:50.561090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.255 [2024-11-19 10:22:50.561103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.255 [2024-11-19 10:22:50.561115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.255 [2024-11-19 10:22:50.561133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.255 [2024-11-19 10:22:50.570852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.255 [2024-11-19 10:22:50.570946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.571006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.571023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.255 [2024-11-19 10:22:50.571035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.255 [2024-11-19 10:22:50.571053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.255 [2024-11-19 10:22:50.571068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.255 [2024-11-19 10:22:50.571077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.255 [2024-11-19 10:22:50.571088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.255 [2024-11-19 10:22:50.571125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.255 [2024-11-19 10:22:50.580913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.255 [2024-11-19 10:22:50.581012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.581063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.581080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.255 [2024-11-19 10:22:50.581091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.255 [2024-11-19 10:22:50.581108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.255 [2024-11-19 10:22:50.581158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.255 [2024-11-19 10:22:50.581179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.255 [2024-11-19 10:22:50.581189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.255 [2024-11-19 10:22:50.581206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.255 [2024-11-19 10:22:50.590975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.255 [2024-11-19 10:22:50.591077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.591125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.591142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.255 [2024-11-19 10:22:50.591153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.255 [2024-11-19 10:22:50.591171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.255 [2024-11-19 10:22:50.591197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.255 [2024-11-19 10:22:50.591208] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.255 [2024-11-19 10:22:50.591217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.255 [2024-11-19 10:22:50.591233] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:31.255 [2024-11-19 10:22:50.601044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.255 [2024-11-19 10:22:50.601136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.601183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.255 [2024-11-19 10:22:50.601200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.255 [2024-11-19 10:22:50.601211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.256 [2024-11-19 10:22:50.601228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.256 [2024-11-19 10:22:50.601264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.256 [2024-11-19 10:22:50.601283] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.256 [2024-11-19 10:22:50.601299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.256 [2024-11-19 10:22:50.601322] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:31.256 [2024-11-19 10:22:50.611102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:31.256 [2024-11-19 10:22:50.611190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.256 [2024-11-19 10:22:50.611236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.256 [2024-11-19 10:22:50.611252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4cf0 with addr=10.0.0.2, port=4420 00:22:31.256 [2024-11-19 10:22:50.611263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4cf0 is same with the state(5) to be set 00:22:31.256 [2024-11-19 10:22:50.611280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f4cf0 (9): Bad file descriptor 00:22:31.256 [2024-11-19 10:22:50.611306] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:31.256 [2024-11-19 10:22:50.611321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:31.256 [2024-11-19 10:22:50.611337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:31.256 [2024-11-19 10:22:50.611360] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
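The burst of connect()/errno 111 and "Bad file descriptor" messages above is expected rather than a failure: with the 4420 listener gone, the host's established admin queue for that path breaks, and every reconnect attempt to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED) until the next discovery log page no longer lists 4420 and the stale path is dropped, while 4421 stays attached. While this is happening, the state can be confirmed from the host using RPCs already shown in the trace:

    # Expected steady state during the 4420 teardown: controller nvme0 keeps the 4421 path,
    # and both namespace bdevs remain available through it.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'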
00:22:31.256 [2024-11-19 10:22:50.620320] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:31.256 [2024-11-19 10:22:50.620354] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:32.263 10:22:51 -- host/discovery.sh@128 -- # get_subsystem_names 00:22:32.263 10:22:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:32.263 10:22:51 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:32.263 10:22:51 -- host/discovery.sh@59 -- # xargs 00:22:32.263 10:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.263 10:22:51 -- host/discovery.sh@59 -- # sort 00:22:32.263 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.263 10:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.263 10:22:51 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@129 -- # get_bdev_list 00:22:32.264 10:22:51 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:32.264 10:22:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.264 10:22:51 -- host/discovery.sh@55 -- # sort 00:22:32.264 10:22:51 -- host/discovery.sh@55 -- # xargs 00:22:32.264 10:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.264 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.264 10:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:22:32.264 10:22:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:32.264 10:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.264 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.264 10:22:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:32.264 10:22:51 -- host/discovery.sh@63 -- # xargs 00:22:32.264 10:22:51 -- host/discovery.sh@63 -- # sort -n 00:22:32.264 10:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@131 -- # get_notification_count 00:22:32.264 10:22:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:32.264 10:22:51 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:32.264 10:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.264 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.264 10:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@74 -- # notification_count=0 00:22:32.264 10:22:51 -- host/discovery.sh@75 -- # notify_id=2 00:22:32.264 10:22:51 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:32.264 10:22:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.264 10:22:51 -- common/autotest_common.sh@10 -- # set +x 00:22:32.264 10:22:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.264 10:22:51 -- host/discovery.sh@135 -- # sleep 1 00:22:33.641 10:22:52 -- host/discovery.sh@136 -- # get_subsystem_names 00:22:33.641 10:22:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:33.641 10:22:52 -- host/discovery.sh@59 -- # sort 00:22:33.641 10:22:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.641 10:22:52 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:33.641 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:33.641 10:22:52 -- host/discovery.sh@59 -- # xargs 00:22:33.641 10:22:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.641 10:22:52 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:22:33.641 10:22:52 -- host/discovery.sh@137 -- # get_bdev_list 00:22:33.641 10:22:52 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:33.641 10:22:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.641 10:22:52 -- host/discovery.sh@55 -- # xargs 00:22:33.641 10:22:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.641 10:22:52 -- host/discovery.sh@55 -- # sort 00:22:33.641 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:33.641 10:22:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.641 10:22:52 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:22:33.641 10:22:52 -- host/discovery.sh@138 -- # get_notification_count 00:22:33.641 10:22:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:33.641 10:22:52 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:33.641 10:22:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.641 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:33.641 10:22:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.641 10:22:52 -- host/discovery.sh@74 -- # notification_count=2 00:22:33.641 10:22:52 -- host/discovery.sh@75 -- # notify_id=4 00:22:33.641 10:22:52 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:22:33.641 10:22:52 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:33.641 10:22:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.641 10:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 [2024-11-19 10:22:53.962235] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:34.577 [2024-11-19 10:22:53.962420] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:34.577 [2024-11-19 10:22:53.962456] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:34.577 [2024-11-19 10:22:54.048366] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:34.577 [2024-11-19 10:22:54.107619] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:34.577 [2024-11-19 10:22:54.107668] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:34.577 10:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.577 10:22:54 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.577 10:22:54 -- common/autotest_common.sh@650 -- # local es=0 00:22:34.577 10:22:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.577 10:22:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:34.577 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.577 10:22:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:34.577 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.577 10:22:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.577 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.577 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.577 2024/11/19 10:22:54 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:34.836 request: 00:22:34.836 { 00:22:34.836 "method": "bdev_nvme_start_discovery", 00:22:34.836 "params": { 00:22:34.836 "name": "nvme", 00:22:34.836 "trtype": "tcp", 00:22:34.836 "traddr": "10.0.0.2", 00:22:34.836 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:34.836 "adrfam": "ipv4", 00:22:34.836 "trsvcid": "8009", 00:22:34.836 "wait_for_attach": true 00:22:34.836 } 
00:22:34.836 } 00:22:34.836 Got JSON-RPC error response 00:22:34.836 GoRPCClient: error on JSON-RPC call 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:34.836 10:22:54 -- common/autotest_common.sh@653 -- # es=1 00:22:34.836 10:22:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:34.836 10:22:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:34.836 10:22:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:34.836 10:22:54 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # sort 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # xargs 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:22:34.836 10:22:54 -- host/discovery.sh@147 -- # get_bdev_list 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # xargs 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # sort 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.836 10:22:54 -- common/autotest_common.sh@650 -- # local es=0 00:22:34.836 10:22:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.836 10:22:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.836 10:22:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.836 2024/11/19 10:22:54 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:34.836 request: 00:22:34.836 { 00:22:34.836 "method": "bdev_nvme_start_discovery", 00:22:34.836 "params": { 00:22:34.836 "name": "nvme_second", 00:22:34.836 "trtype": "tcp", 00:22:34.836 "traddr": "10.0.0.2", 00:22:34.836 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:34.836 "adrfam": "ipv4", 00:22:34.836 
"trsvcid": "8009", 00:22:34.836 "wait_for_attach": true 00:22:34.836 } 00:22:34.836 } 00:22:34.836 Got JSON-RPC error response 00:22:34.836 GoRPCClient: error on JSON-RPC call 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:34.836 10:22:54 -- common/autotest_common.sh@653 -- # es=1 00:22:34.836 10:22:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:34.836 10:22:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:34.836 10:22:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:34.836 10:22:54 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # sort 00:22:34.836 10:22:54 -- host/discovery.sh@67 -- # xargs 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:22:34.836 10:22:54 -- host/discovery.sh@153 -- # get_bdev_list 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # sort 00:22:34.836 10:22:54 -- host/discovery.sh@55 -- # xargs 00:22:34.836 10:22:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:34.836 10:22:54 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:34.836 10:22:54 -- common/autotest_common.sh@650 -- # local es=0 00:22:34.836 10:22:54 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:34.836 10:22:54 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:34.836 10:22:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:34.836 10:22:54 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:34.836 10:22:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.836 10:22:54 -- common/autotest_common.sh@10 -- # set +x 00:22:36.213 [2024-11-19 10:22:55.361701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.213 [2024-11-19 10:22:55.361808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:36.213 [2024-11-19 10:22:55.361852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x832d50 with addr=10.0.0.2, port=8010 00:22:36.213 [2024-11-19 10:22:55.361874] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:36.213 [2024-11-19 10:22:55.361885] 
nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:36.213 [2024-11-19 10:22:55.361894] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:37.149 [2024-11-19 10:22:56.361689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.149 [2024-11-19 10:22:56.361800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:37.149 [2024-11-19 10:22:56.361841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x832d50 with addr=10.0.0.2, port=8010 00:22:37.149 [2024-11-19 10:22:56.361864] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:37.149 [2024-11-19 10:22:56.361875] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:37.149 [2024-11-19 10:22:56.361885] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:38.091 [2024-11-19 10:22:57.361540] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:38.091 2024/11/19 10:22:57 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:38.091 request: 00:22:38.091 { 00:22:38.091 "method": "bdev_nvme_start_discovery", 00:22:38.091 "params": { 00:22:38.091 "name": "nvme_second", 00:22:38.091 "trtype": "tcp", 00:22:38.091 "traddr": "10.0.0.2", 00:22:38.091 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:38.091 "adrfam": "ipv4", 00:22:38.091 "trsvcid": "8010", 00:22:38.091 "attach_timeout_ms": 3000 00:22:38.091 } 00:22:38.091 } 00:22:38.091 Got JSON-RPC error response 00:22:38.091 GoRPCClient: error on JSON-RPC call 00:22:38.091 10:22:57 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:38.091 10:22:57 -- common/autotest_common.sh@653 -- # es=1 00:22:38.091 10:22:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:38.091 10:22:57 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:38.091 10:22:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:38.091 10:22:57 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:22:38.091 10:22:57 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:38.091 10:22:57 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:38.091 10:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.091 10:22:57 -- host/discovery.sh@67 -- # sort 00:22:38.091 10:22:57 -- host/discovery.sh@67 -- # xargs 00:22:38.091 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:22:38.091 10:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.091 10:22:57 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:22:38.091 10:22:57 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:22:38.091 10:22:57 -- host/discovery.sh@162 -- # kill 95769 00:22:38.091 10:22:57 -- host/discovery.sh@163 -- # nvmftestfini 00:22:38.091 10:22:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:38.091 10:22:57 -- nvmf/common.sh@116 -- # sync 00:22:38.091 10:22:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:38.091 10:22:57 -- nvmf/common.sh@119 -- # set +e 00:22:38.091 10:22:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:38.091 10:22:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:38.091 
rmmod nvme_tcp 00:22:38.091 rmmod nvme_fabrics 00:22:38.091 rmmod nvme_keyring 00:22:38.091 10:22:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:38.091 10:22:57 -- nvmf/common.sh@123 -- # set -e 00:22:38.091 10:22:57 -- nvmf/common.sh@124 -- # return 0 00:22:38.091 10:22:57 -- nvmf/common.sh@477 -- # '[' -n 95734 ']' 00:22:38.091 10:22:57 -- nvmf/common.sh@478 -- # killprocess 95734 00:22:38.091 10:22:57 -- common/autotest_common.sh@936 -- # '[' -z 95734 ']' 00:22:38.091 10:22:57 -- common/autotest_common.sh@940 -- # kill -0 95734 00:22:38.091 10:22:57 -- common/autotest_common.sh@941 -- # uname 00:22:38.091 10:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:38.091 10:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95734 00:22:38.091 10:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:38.091 10:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:38.091 killing process with pid 95734 00:22:38.091 10:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95734' 00:22:38.091 10:22:57 -- common/autotest_common.sh@955 -- # kill 95734 00:22:38.091 10:22:57 -- common/autotest_common.sh@960 -- # wait 95734 00:22:38.351 10:22:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:38.351 10:22:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:38.351 10:22:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:38.351 10:22:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.351 10:22:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:38.351 10:22:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.351 10:22:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.351 10:22:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.351 10:22:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:38.351 00:22:38.351 real 0m13.469s 00:22:38.351 user 0m27.040s 00:22:38.351 sys 0m1.475s 00:22:38.351 10:22:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:38.351 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:22:38.351 ************************************ 00:22:38.351 END TEST nvmf_discovery 00:22:38.351 ************************************ 00:22:38.351 10:22:57 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:38.351 10:22:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:38.351 10:22:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.351 10:22:57 -- common/autotest_common.sh@10 -- # set +x 00:22:38.351 ************************************ 00:22:38.351 START TEST nvmf_discovery_remove_ifc 00:22:38.351 ************************************ 00:22:38.351 10:22:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:38.351 * Looking for test storage... 
00:22:38.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:38.663 10:22:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:38.663 10:22:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:38.663 10:22:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:38.663 10:22:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:38.663 10:22:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:38.663 10:22:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:38.663 10:22:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:38.663 10:22:58 -- scripts/common.sh@335 -- # IFS=.-: 00:22:38.663 10:22:58 -- scripts/common.sh@335 -- # read -ra ver1 00:22:38.663 10:22:58 -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.663 10:22:58 -- scripts/common.sh@336 -- # read -ra ver2 00:22:38.663 10:22:58 -- scripts/common.sh@337 -- # local 'op=<' 00:22:38.663 10:22:58 -- scripts/common.sh@339 -- # ver1_l=2 00:22:38.663 10:22:58 -- scripts/common.sh@340 -- # ver2_l=1 00:22:38.663 10:22:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:38.664 10:22:58 -- scripts/common.sh@343 -- # case "$op" in 00:22:38.664 10:22:58 -- scripts/common.sh@344 -- # : 1 00:22:38.664 10:22:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:38.664 10:22:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:38.664 10:22:58 -- scripts/common.sh@364 -- # decimal 1 00:22:38.664 10:22:58 -- scripts/common.sh@352 -- # local d=1 00:22:38.664 10:22:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.664 10:22:58 -- scripts/common.sh@354 -- # echo 1 00:22:38.664 10:22:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:38.664 10:22:58 -- scripts/common.sh@365 -- # decimal 2 00:22:38.664 10:22:58 -- scripts/common.sh@352 -- # local d=2 00:22:38.664 10:22:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.664 10:22:58 -- scripts/common.sh@354 -- # echo 2 00:22:38.664 10:22:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:38.664 10:22:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:38.664 10:22:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:38.664 10:22:58 -- scripts/common.sh@367 -- # return 0 00:22:38.664 10:22:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.664 10:22:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.664 --rc genhtml_branch_coverage=1 00:22:38.664 --rc genhtml_function_coverage=1 00:22:38.664 --rc genhtml_legend=1 00:22:38.664 --rc geninfo_all_blocks=1 00:22:38.664 --rc geninfo_unexecuted_blocks=1 00:22:38.664 00:22:38.664 ' 00:22:38.664 10:22:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.664 --rc genhtml_branch_coverage=1 00:22:38.664 --rc genhtml_function_coverage=1 00:22:38.664 --rc genhtml_legend=1 00:22:38.664 --rc geninfo_all_blocks=1 00:22:38.664 --rc geninfo_unexecuted_blocks=1 00:22:38.664 00:22:38.664 ' 00:22:38.664 10:22:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.664 --rc genhtml_branch_coverage=1 00:22:38.664 --rc genhtml_function_coverage=1 00:22:38.664 --rc genhtml_legend=1 00:22:38.664 --rc geninfo_all_blocks=1 00:22:38.664 --rc geninfo_unexecuted_blocks=1 00:22:38.664 00:22:38.664 ' 00:22:38.664 
10:22:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:38.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.664 --rc genhtml_branch_coverage=1 00:22:38.664 --rc genhtml_function_coverage=1 00:22:38.664 --rc genhtml_legend=1 00:22:38.664 --rc geninfo_all_blocks=1 00:22:38.664 --rc geninfo_unexecuted_blocks=1 00:22:38.664 00:22:38.664 ' 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:38.664 10:22:58 -- nvmf/common.sh@7 -- # uname -s 00:22:38.664 10:22:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.664 10:22:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.664 10:22:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.664 10:22:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.664 10:22:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.664 10:22:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.664 10:22:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.664 10:22:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.664 10:22:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.664 10:22:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:22:38.664 10:22:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:22:38.664 10:22:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.664 10:22:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.664 10:22:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:38.664 10:22:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:38.664 10:22:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.664 10:22:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.664 10:22:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.664 10:22:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:22:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:22:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:22:58 -- paths/export.sh@5 -- # export PATH 00:22:38.664 10:22:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.664 10:22:58 -- nvmf/common.sh@46 -- # : 0 00:22:38.664 10:22:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:38.664 10:22:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:38.664 10:22:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:38.664 10:22:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.664 10:22:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.664 10:22:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:38.664 10:22:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:38.664 10:22:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:38.664 10:22:58 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:38.664 10:22:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:38.664 10:22:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.664 10:22:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:38.664 10:22:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:38.664 10:22:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:38.664 10:22:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.664 10:22:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.664 10:22:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.664 10:22:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:38.664 10:22:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:38.664 10:22:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.664 10:22:58 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.664 10:22:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:38.664 10:22:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:38.664 10:22:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:38.664 10:22:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:38.664 10:22:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:38.664 10:22:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.664 10:22:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:38.664 10:22:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:38.664 10:22:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:38.664 10:22:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:38.664 10:22:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:38.664 10:22:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:38.664 Cannot find device "nvmf_tgt_br" 00:22:38.664 10:22:58 -- nvmf/common.sh@154 -- # true 00:22:38.664 10:22:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:38.664 Cannot find device "nvmf_tgt_br2" 00:22:38.664 10:22:58 -- nvmf/common.sh@155 -- # true 00:22:38.664 10:22:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:38.664 10:22:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:38.664 Cannot find device "nvmf_tgt_br" 00:22:38.664 10:22:58 -- nvmf/common.sh@157 -- # true 00:22:38.664 10:22:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:38.664 Cannot find device "nvmf_tgt_br2" 00:22:38.664 10:22:58 -- nvmf/common.sh@158 -- # true 00:22:38.664 10:22:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:38.664 10:22:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:38.664 10:22:58 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:38.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.664 10:22:58 -- nvmf/common.sh@161 -- # true 00:22:38.664 10:22:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:38.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:38.665 10:22:58 -- nvmf/common.sh@162 -- # true 00:22:38.665 10:22:58 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:38.665 10:22:58 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:38.665 10:22:58 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:38.938 10:22:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:38.938 10:22:58 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:38.938 10:22:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:38.938 10:22:58 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:38.938 10:22:58 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:38.938 10:22:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:38.938 10:22:58 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:38.938 10:22:58 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:38.938 10:22:58 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:38.938 10:22:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:38.938 10:22:58 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:38.938 10:22:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:38.938 10:22:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:38.938 10:22:58 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:38.938 10:22:58 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:38.938 10:22:58 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:38.938 10:22:58 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:38.938 10:22:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:38.938 10:22:58 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:38.938 10:22:58 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:38.938 10:22:58 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:38.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:22:38.938 00:22:38.938 --- 10.0.0.2 ping statistics --- 00:22:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.938 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:38.939 10:22:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:38.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:38.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:38.939 00:22:38.939 --- 10.0.0.3 ping statistics --- 00:22:38.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.939 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:38.939 10:22:58 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:38.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:38.939 00:22:38.939 --- 10.0.0.1 ping statistics --- 00:22:38.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.939 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:38.939 10:22:58 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.939 10:22:58 -- nvmf/common.sh@421 -- # return 0 00:22:38.939 10:22:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:38.939 10:22:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.939 10:22:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:38.939 10:22:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:38.939 10:22:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.939 10:22:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:38.939 10:22:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:38.939 10:22:58 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:38.939 10:22:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:38.939 10:22:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.939 10:22:58 -- common/autotest_common.sh@10 -- # set +x 00:22:38.939 10:22:58 -- nvmf/common.sh@469 -- # nvmfpid=96278 00:22:38.939 10:22:58 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.939 10:22:58 -- nvmf/common.sh@470 -- # waitforlisten 96278 00:22:38.939 10:22:58 -- common/autotest_common.sh@829 -- # '[' -z 96278 ']' 00:22:38.939 10:22:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.939 10:22:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.939 10:22:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.939 10:22:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.939 10:22:58 -- common/autotest_common.sh@10 -- # set +x 00:22:38.939 [2024-11-19 10:22:58.427893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:38.939 [2024-11-19 10:22:58.427986] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.198 [2024-11-19 10:22:58.586591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.198 [2024-11-19 10:22:58.631715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:39.198 [2024-11-19 10:22:58.631881] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.198 [2024-11-19 10:22:58.631898] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.198 [2024-11-19 10:22:58.631907] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
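For anyone rebuilding this environment by hand, the namespace and veth plumbing that nvmf_veth_init performs above condenses to roughly the sketch below. Every name and address is taken from the trace itself; the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is omitted here for brevity, so this is an illustrative subset rather than the full routine.

  # target side lives in its own namespace, initiator side stays in the default one
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the veth peers so initiator and target can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # the successful pings recorded above verify this path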
00:22:39.198 [2024-11-19 10:22:58.631940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.133 10:22:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.133 10:22:59 -- common/autotest_common.sh@862 -- # return 0 00:22:40.133 10:22:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:40.133 10:22:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.133 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.133 10:22:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.133 10:22:59 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:40.133 10:22:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.133 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.133 [2024-11-19 10:22:59.487524] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.133 [2024-11-19 10:22:59.495682] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:40.133 null0 00:22:40.133 [2024-11-19 10:22:59.527605] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.133 10:22:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.133 10:22:59 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96328 00:22:40.133 10:22:59 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:40.133 10:22:59 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96328 /tmp/host.sock 00:22:40.133 10:22:59 -- common/autotest_common.sh@829 -- # '[' -z 96328 ']' 00:22:40.133 10:22:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:40.133 10:22:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.133 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:40.133 10:22:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:40.133 10:22:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.133 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.133 [2024-11-19 10:22:59.595367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
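Note that two separate SPDK processes are being started in this stretch of the log, and the "Starting SPDK" banner just above belongs to the second of them. A condensed sketch of the pair, with both command lines copied from the trace (the background-and-wait handling done by nvmfappstart/waitforlisten is simplified to comments):

  # target side (pid 96278 above): runs inside the namespace, default RPC socket /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # host side (pid 96328 above): a second SPDK app acting as the NVMe-oF initiator,
  # parked on its own RPC socket so target and host can be driven independently
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  # the host app is released later with: rpc_cmd -s /tmp/host.sock framework_start_init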
00:22:40.133 [2024-11-19 10:22:59.595452] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96328 ] 00:22:40.392 [2024-11-19 10:22:59.732029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.392 [2024-11-19 10:22:59.774501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:40.392 [2024-11-19 10:22:59.774687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.392 10:22:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.392 10:22:59 -- common/autotest_common.sh@862 -- # return 0 00:22:40.392 10:22:59 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.392 10:22:59 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:40.392 10:22:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.392 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.392 10:22:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.392 10:22:59 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:40.392 10:22:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.392 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:40.392 10:22:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.392 10:22:59 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:40.392 10:22:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.392 10:22:59 -- common/autotest_common.sh@10 -- # set +x 00:22:41.767 [2024-11-19 10:23:00.951113] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:41.767 [2024-11-19 10:23:00.951178] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:41.767 [2024-11-19 10:23:00.951200] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:41.767 [2024-11-19 10:23:01.037270] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:41.767 [2024-11-19 10:23:01.093256] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:41.767 [2024-11-19 10:23:01.093321] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:41.768 [2024-11-19 10:23:01.093350] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:41.768 [2024-11-19 10:23:01.093371] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:41.768 [2024-11-19 10:23:01.093402] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:41.768 10:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.768 [2024-11-19 
10:23:01.099296] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bbf6c0 was disconnected and freed. delete nvme_qpair. 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.768 10:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.768 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.768 10:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.768 10:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.768 10:23:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.768 10:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.768 10:23:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.702 10:23:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.702 10:23:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.702 10:23:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.702 10:23:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.702 10:23:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.702 10:23:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.702 10:23:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.960 10:23:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.960 10:23:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.960 10:23:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.894 10:23:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.894 10:23:03 -- common/autotest_common.sh@10 -- # set +x 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.894 10:23:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.894 10:23:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.828 10:23:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.828 10:23:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.828 10:23:04 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.828 10:23:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.828 10:23:04 -- common/autotest_common.sh@10 -- # set +x 00:22:44.828 10:23:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.828 10:23:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.828 10:23:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.111 10:23:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.111 10:23:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.083 10:23:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.083 10:23:05 -- common/autotest_common.sh@10 -- # set +x 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.083 10:23:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.083 10:23:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.017 10:23:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.017 10:23:06 -- common/autotest_common.sh@10 -- # set +x 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.017 10:23:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.017 10:23:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.017 [2024-11-19 10:23:06.521201] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:47.017 [2024-11-19 10:23:06.521274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.017 [2024-11-19 10:23:06.521291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.017 [2024-11-19 10:23:06.521305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.017 [2024-11-19 10:23:06.521314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.017 [2024-11-19 10:23:06.521325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.017 [2024-11-19 10:23:06.521334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.017 [2024-11-19 10:23:06.521344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.017 [2024-11-19 10:23:06.521353] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.017 [2024-11-19 10:23:06.521363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:47.017 [2024-11-19 10:23:06.521372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.017 [2024-11-19 10:23:06.521382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9b4b0 is same with the state(5) to be set 00:22:47.017 [2024-11-19 10:23:06.531196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9b4b0 (9): Bad file descriptor 00:22:47.017 [2024-11-19 10:23:06.541230] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.392 10:23:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.392 10:23:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.392 10:23:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.392 10:23:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.392 10:23:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.392 10:23:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.392 10:23:07 -- common/autotest_common.sh@10 -- # set +x 00:22:48.392 [2024-11-19 10:23:07.592917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:49.326 [2024-11-19 10:23:08.616884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:49.326 [2024-11-19 10:23:08.617503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b9b4b0 with addr=10.0.0.2, port=4420 00:22:49.326 [2024-11-19 10:23:08.617650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9b4b0 is same with the state(5) to be set 00:22:49.326 [2024-11-19 10:23:08.617807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.326 [2024-11-19 10:23:08.617976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.326 [2024-11-19 10:23:08.618114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.326 [2024-11-19 10:23:08.618229] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:49.326 [2024-11-19 10:23:08.619666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9b4b0 (9): Bad file descriptor 00:22:49.326 [2024-11-19 10:23:08.619937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:49.326 [2024-11-19 10:23:08.620092] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:49.326 [2024-11-19 10:23:08.620159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.326 [2024-11-19 10:23:08.620281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-11-19 10:23:08.620403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.326 [2024-11-19 10:23:08.620521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-11-19 10:23:08.620613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.326 [2024-11-19 10:23:08.620739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-11-19 10:23:08.620854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.326 [2024-11-19 10:23:08.620971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-11-19 10:23:08.621082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:49.326 [2024-11-19 10:23:08.621194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.326 [2024-11-19 10:23:08.621301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:49.326 [2024-11-19 10:23:08.621432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b868f0 (9): Bad file descriptor 00:22:49.326 [2024-11-19 10:23:08.621656] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:49.326 [2024-11-19 10:23:08.621797] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:49.326 10:23:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.326 10:23:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.326 10:23:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.260 10:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.260 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.260 10:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.260 10:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.260 10:23:09 -- common/autotest_common.sh@10 -- # set +x 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.260 10:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:50.260 10:23:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.193 [2024-11-19 10:23:10.632858] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:51.193 [2024-11-19 10:23:10.632921] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:51.193 [2024-11-19 10:23:10.632954] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:51.193 [2024-11-19 10:23:10.719084] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:51.450 [2024-11-19 10:23:10.774226] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:51.450 [2024-11-19 10:23:10.774284] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:51.450 [2024-11-19 10:23:10.774308] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:51.450 [2024-11-19 10:23:10.774325] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:51.450 [2024-11-19 10:23:10.774334] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.450 10:23:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.450 10:23:10 -- common/autotest_common.sh@10 -- # set +x 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.450 [2024-11-19 10:23:10.781094] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1bca330 was disconnected and freed. delete nvme_qpair. 00:22:51.450 10:23:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:51.450 10:23:10 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96328 00:22:51.450 10:23:10 -- common/autotest_common.sh@936 -- # '[' -z 96328 ']' 00:22:51.450 10:23:10 -- common/autotest_common.sh@940 -- # kill -0 96328 00:22:51.450 10:23:10 -- common/autotest_common.sh@941 -- # uname 00:22:51.450 10:23:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.450 10:23:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96328 00:22:51.450 10:23:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.450 10:23:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.450 killing process with pid 96328 00:22:51.450 10:23:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96328' 00:22:51.450 10:23:10 -- common/autotest_common.sh@955 -- # kill 96328 00:22:51.450 10:23:10 -- common/autotest_common.sh@960 -- # wait 96328 00:22:51.708 10:23:10 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:51.708 10:23:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:51.708 10:23:10 -- nvmf/common.sh@116 -- # sync 00:22:51.708 10:23:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:51.708 10:23:11 -- nvmf/common.sh@119 -- # set +e 00:22:51.708 10:23:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:51.708 10:23:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:51.708 rmmod nvme_tcp 00:22:51.708 rmmod nvme_fabrics 00:22:51.708 rmmod nvme_keyring 00:22:51.708 10:23:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:51.708 10:23:11 -- nvmf/common.sh@123 -- # set -e 00:22:51.708 10:23:11 -- nvmf/common.sh@124 -- # return 0 00:22:51.708 10:23:11 -- nvmf/common.sh@477 -- # '[' -n 96278 ']' 00:22:51.708 10:23:11 -- nvmf/common.sh@478 -- # killprocess 96278 00:22:51.708 10:23:11 -- common/autotest_common.sh@936 -- # '[' -z 96278 ']' 00:22:51.708 10:23:11 -- common/autotest_common.sh@940 -- # kill -0 96278 00:22:51.708 10:23:11 -- common/autotest_common.sh@941 -- # uname 00:22:51.708 10:23:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.708 10:23:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96278 00:22:51.708 killing process with pid 96278 00:22:51.708 10:23:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:51.708 10:23:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
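Stripped of the xtrace noise, the interface remove/re-add cycle that just completed amounts to the sequence below. The commands are copied from the trace (rpc_cmd is the test harness's RPC helper, backed by scripts/rpc.py, pointed at /tmp/host.sock); the wait_for_bdev polling loops around bdev_get_bdevs are reduced to comments, so treat this as a summary sketch rather than the exact script.

  # start discovery with aggressive reconnect/loss timeouts so the cycle completes quickly
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  rpc_cmd -s /tmp/host.sock bdev_get_bdevs   # nvme0n1 appears once the subsystem attaches
  # pull the target address out from under the live connection
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  # reconnects fail (errno 110), ctrlr-loss-timeout expires, the bdev list drains to empty
  # restore the interface; discovery reconnects and the namespace comes back as nvme1n1
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up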
00:22:51.708 10:23:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96278' 00:22:51.708 10:23:11 -- common/autotest_common.sh@955 -- # kill 96278 00:22:51.708 10:23:11 -- common/autotest_common.sh@960 -- # wait 96278 00:22:51.967 10:23:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:51.967 10:23:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:51.967 10:23:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:51.967 10:23:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.967 10:23:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:51.967 10:23:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.967 10:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.967 10:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.967 10:23:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:51.967 00:22:51.967 real 0m13.484s 00:22:51.967 user 0m22.871s 00:22:51.967 sys 0m1.392s 00:22:51.967 10:23:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:51.967 ************************************ 00:22:51.967 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:51.967 END TEST nvmf_discovery_remove_ifc 00:22:51.967 ************************************ 00:22:51.967 10:23:11 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:51.967 10:23:11 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:51.967 10:23:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:51.967 10:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.967 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:51.967 ************************************ 00:22:51.967 START TEST nvmf_digest 00:22:51.967 ************************************ 00:22:51.967 10:23:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:51.967 * Looking for test storage... 00:22:51.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:51.967 10:23:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:51.967 10:23:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:51.967 10:23:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:51.967 10:23:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:51.967 10:23:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:51.967 10:23:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:51.967 10:23:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:51.967 10:23:11 -- scripts/common.sh@335 -- # IFS=.-: 00:22:51.967 10:23:11 -- scripts/common.sh@335 -- # read -ra ver1 00:22:51.967 10:23:11 -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.967 10:23:11 -- scripts/common.sh@336 -- # read -ra ver2 00:22:51.967 10:23:11 -- scripts/common.sh@337 -- # local 'op=<' 00:22:51.967 10:23:11 -- scripts/common.sh@339 -- # ver1_l=2 00:22:51.967 10:23:11 -- scripts/common.sh@340 -- # ver2_l=1 00:22:51.967 10:23:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:51.967 10:23:11 -- scripts/common.sh@343 -- # case "$op" in 00:22:51.967 10:23:11 -- scripts/common.sh@344 -- # : 1 00:22:51.967 10:23:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:51.967 10:23:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.226 10:23:11 -- scripts/common.sh@364 -- # decimal 1 00:22:52.226 10:23:11 -- scripts/common.sh@352 -- # local d=1 00:22:52.226 10:23:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.226 10:23:11 -- scripts/common.sh@354 -- # echo 1 00:22:52.226 10:23:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:52.226 10:23:11 -- scripts/common.sh@365 -- # decimal 2 00:22:52.226 10:23:11 -- scripts/common.sh@352 -- # local d=2 00:22:52.226 10:23:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.226 10:23:11 -- scripts/common.sh@354 -- # echo 2 00:22:52.226 10:23:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:52.226 10:23:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:52.226 10:23:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:52.226 10:23:11 -- scripts/common.sh@367 -- # return 0 00:22:52.226 10:23:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.226 10:23:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:52.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.226 --rc genhtml_branch_coverage=1 00:22:52.226 --rc genhtml_function_coverage=1 00:22:52.226 --rc genhtml_legend=1 00:22:52.227 --rc geninfo_all_blocks=1 00:22:52.227 --rc geninfo_unexecuted_blocks=1 00:22:52.227 00:22:52.227 ' 00:22:52.227 10:23:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:52.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.227 --rc genhtml_branch_coverage=1 00:22:52.227 --rc genhtml_function_coverage=1 00:22:52.227 --rc genhtml_legend=1 00:22:52.227 --rc geninfo_all_blocks=1 00:22:52.227 --rc geninfo_unexecuted_blocks=1 00:22:52.227 00:22:52.227 ' 00:22:52.227 10:23:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:52.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.227 --rc genhtml_branch_coverage=1 00:22:52.227 --rc genhtml_function_coverage=1 00:22:52.227 --rc genhtml_legend=1 00:22:52.227 --rc geninfo_all_blocks=1 00:22:52.227 --rc geninfo_unexecuted_blocks=1 00:22:52.227 00:22:52.227 ' 00:22:52.227 10:23:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:52.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.227 --rc genhtml_branch_coverage=1 00:22:52.227 --rc genhtml_function_coverage=1 00:22:52.227 --rc genhtml_legend=1 00:22:52.227 --rc geninfo_all_blocks=1 00:22:52.227 --rc geninfo_unexecuted_blocks=1 00:22:52.227 00:22:52.227 ' 00:22:52.227 10:23:11 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.227 10:23:11 -- nvmf/common.sh@7 -- # uname -s 00:22:52.227 10:23:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.227 10:23:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.227 10:23:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.227 10:23:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.227 10:23:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.227 10:23:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.227 10:23:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.227 10:23:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.227 10:23:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.227 10:23:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:22:52.227 
10:23:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:22:52.227 10:23:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.227 10:23:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.227 10:23:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.227 10:23:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.227 10:23:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.227 10:23:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.227 10:23:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.227 10:23:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.227 10:23:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.227 10:23:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.227 10:23:11 -- paths/export.sh@5 -- # export PATH 00:22:52.227 10:23:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.227 10:23:11 -- nvmf/common.sh@46 -- # : 0 00:22:52.227 10:23:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:52.227 10:23:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:52.227 10:23:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:52.227 10:23:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.227 10:23:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.227 10:23:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:22:52.227 10:23:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:52.227 10:23:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:52.227 10:23:11 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:52.227 10:23:11 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:52.227 10:23:11 -- host/digest.sh@16 -- # runtime=2 00:22:52.227 10:23:11 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:52.227 10:23:11 -- host/digest.sh@132 -- # nvmftestinit 00:22:52.227 10:23:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:52.227 10:23:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.227 10:23:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:52.227 10:23:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:52.227 10:23:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:52.227 10:23:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.227 10:23:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.227 10:23:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.227 10:23:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:52.227 10:23:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:52.227 10:23:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.227 10:23:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.227 10:23:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:52.227 10:23:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:52.227 10:23:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.227 10:23:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.227 10:23:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.227 10:23:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.227 10:23:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.227 10:23:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.227 10:23:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.227 10:23:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.227 10:23:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:52.227 10:23:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:52.227 Cannot find device "nvmf_tgt_br" 00:22:52.227 10:23:11 -- nvmf/common.sh@154 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.227 Cannot find device "nvmf_tgt_br2" 00:22:52.227 10:23:11 -- nvmf/common.sh@155 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:52.227 10:23:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:52.227 Cannot find device "nvmf_tgt_br" 00:22:52.227 10:23:11 -- nvmf/common.sh@157 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:52.227 Cannot find device "nvmf_tgt_br2" 00:22:52.227 10:23:11 -- nvmf/common.sh@158 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:52.227 10:23:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:52.227 
10:23:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.227 10:23:11 -- nvmf/common.sh@161 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.227 10:23:11 -- nvmf/common.sh@162 -- # true 00:22:52.227 10:23:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.227 10:23:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.227 10:23:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.227 10:23:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.227 10:23:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.227 10:23:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.486 10:23:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.486 10:23:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.486 10:23:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.486 10:23:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:52.486 10:23:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:52.486 10:23:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:52.486 10:23:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:52.486 10:23:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.486 10:23:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.486 10:23:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.486 10:23:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:52.486 10:23:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:52.486 10:23:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.486 10:23:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.486 10:23:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.486 10:23:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.486 10:23:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.486 10:23:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:52.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:22:52.486 00:22:52.486 --- 10.0.0.2 ping statistics --- 00:22:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.486 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:52.486 10:23:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:52.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:52.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:22:52.486 00:22:52.486 --- 10.0.0.3 ping statistics --- 00:22:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.486 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:22:52.486 10:23:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:22:52.486 00:22:52.486 --- 10.0.0.1 ping statistics --- 00:22:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.486 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:22:52.486 10:23:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.486 10:23:11 -- nvmf/common.sh@421 -- # return 0 00:22:52.486 10:23:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:52.486 10:23:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.486 10:23:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:52.486 10:23:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:52.486 10:23:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.486 10:23:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:52.486 10:23:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:52.486 10:23:11 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:52.486 10:23:11 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:52.486 10:23:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:52.486 10:23:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:52.486 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:52.486 ************************************ 00:22:52.486 START TEST nvmf_digest_clean 00:22:52.486 ************************************ 00:22:52.486 10:23:11 -- common/autotest_common.sh@1114 -- # run_digest 00:22:52.486 10:23:11 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:52.486 10:23:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:52.486 10:23:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:52.486 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:52.486 10:23:11 -- nvmf/common.sh@469 -- # nvmfpid=96729 00:22:52.486 10:23:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:52.486 10:23:11 -- nvmf/common.sh@470 -- # waitforlisten 96729 00:22:52.486 10:23:11 -- common/autotest_common.sh@829 -- # '[' -z 96729 ']' 00:22:52.486 10:23:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.486 10:23:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.486 10:23:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.486 10:23:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.486 10:23:11 -- common/autotest_common.sh@10 -- # set +x 00:22:52.486 [2024-11-19 10:23:12.008451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:22:52.486 [2024-11-19 10:23:12.008584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.745 [2024-11-19 10:23:12.167508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.745 [2024-11-19 10:23:12.202124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:52.745 [2024-11-19 10:23:12.202263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.745 [2024-11-19 10:23:12.202276] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.745 [2024-11-19 10:23:12.202284] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.745 [2024-11-19 10:23:12.202315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.745 10:23:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.745 10:23:12 -- common/autotest_common.sh@862 -- # return 0 00:22:52.745 10:23:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:52.745 10:23:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:52.745 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:52.745 10:23:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.745 10:23:12 -- host/digest.sh@120 -- # common_target_config 00:22:52.745 10:23:12 -- host/digest.sh@43 -- # rpc_cmd 00:22:52.745 10:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.745 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:53.003 null0 00:22:53.003 [2024-11-19 10:23:12.339515] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.003 [2024-11-19 10:23:12.363643] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.003 10:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.003 10:23:12 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:53.003 10:23:12 -- host/digest.sh@77 -- # local rw bs qd 00:22:53.003 10:23:12 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:53.003 10:23:12 -- host/digest.sh@80 -- # rw=randread 00:22:53.003 10:23:12 -- host/digest.sh@80 -- # bs=4096 00:22:53.003 10:23:12 -- host/digest.sh@80 -- # qd=128 00:22:53.003 10:23:12 -- host/digest.sh@82 -- # bperfpid=96771 00:22:53.003 10:23:12 -- host/digest.sh@83 -- # waitforlisten 96771 /var/tmp/bperf.sock 00:22:53.003 10:23:12 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:53.003 10:23:12 -- common/autotest_common.sh@829 -- # '[' -z 96771 ']' 00:22:53.003 10:23:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:53.003 10:23:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.003 10:23:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:53.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:53.003 10:23:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.003 10:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:53.003 [2024-11-19 10:23:12.424305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:53.003 [2024-11-19 10:23:12.424427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96771 ] 00:22:53.261 [2024-11-19 10:23:12.569628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.261 [2024-11-19 10:23:12.612657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.261 10:23:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.262 10:23:12 -- common/autotest_common.sh@862 -- # return 0 00:22:53.262 10:23:12 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:53.262 10:23:12 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:53.262 10:23:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:53.520 10:23:12 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:53.520 10:23:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:54.087 nvme0n1 00:22:54.087 10:23:13 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:54.087 10:23:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:54.087 Running I/O for 2 seconds... 
00:22:55.988 00:22:55.988 Latency(us) 00:22:55.988 [2024-11-19T10:23:15.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.988 [2024-11-19T10:23:15.534Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:55.988 nvme0n1 : 2.00 18082.68 70.64 0.00 0.00 7071.48 3306.59 24069.59 00:22:55.988 [2024-11-19T10:23:15.535Z] =================================================================================================================== 00:22:55.989 [2024-11-19T10:23:15.535Z] Total : 18082.68 70.64 0.00 0.00 7071.48 3306.59 24069.59 00:22:55.989 0 00:22:56.246 10:23:15 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:56.246 10:23:15 -- host/digest.sh@92 -- # get_accel_stats 00:22:56.246 10:23:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:56.247 10:23:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:56.247 10:23:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:56.247 | select(.opcode=="crc32c") 00:22:56.247 | "\(.module_name) \(.executed)"' 00:22:56.505 10:23:15 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:56.505 10:23:15 -- host/digest.sh@93 -- # exp_module=software 00:22:56.505 10:23:15 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:56.505 10:23:15 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:56.505 10:23:15 -- host/digest.sh@97 -- # killprocess 96771 00:22:56.505 10:23:15 -- common/autotest_common.sh@936 -- # '[' -z 96771 ']' 00:22:56.505 10:23:15 -- common/autotest_common.sh@940 -- # kill -0 96771 00:22:56.505 10:23:15 -- common/autotest_common.sh@941 -- # uname 00:22:56.505 10:23:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.505 10:23:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96771 00:22:56.505 10:23:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:56.505 killing process with pid 96771 00:22:56.505 10:23:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:56.505 10:23:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96771' 00:22:56.505 Received shutdown signal, test time was about 2.000000 seconds 00:22:56.505 00:22:56.505 Latency(us) 00:22:56.505 [2024-11-19T10:23:16.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.505 [2024-11-19T10:23:16.051Z] =================================================================================================================== 00:22:56.505 [2024-11-19T10:23:16.051Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.505 10:23:15 -- common/autotest_common.sh@955 -- # kill 96771 00:22:56.505 10:23:15 -- common/autotest_common.sh@960 -- # wait 96771 00:22:56.763 10:23:16 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:56.763 10:23:16 -- host/digest.sh@77 -- # local rw bs qd 00:22:56.763 10:23:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:56.763 10:23:16 -- host/digest.sh@80 -- # rw=randread 00:22:56.763 10:23:16 -- host/digest.sh@80 -- # bs=131072 00:22:56.763 10:23:16 -- host/digest.sh@80 -- # qd=16 00:22:56.763 10:23:16 -- host/digest.sh@82 -- # bperfpid=96841 00:22:56.763 10:23:16 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:56.763 10:23:16 -- host/digest.sh@83 -- # waitforlisten 96841 /var/tmp/bperf.sock 00:22:56.763 10:23:16 -- 
common/autotest_common.sh@829 -- # '[' -z 96841 ']' 00:22:56.763 10:23:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:56.763 10:23:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:56.763 10:23:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:56.763 10:23:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.763 10:23:16 -- common/autotest_common.sh@10 -- # set +x 00:22:56.763 [2024-11-19 10:23:16.111175] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:56.763 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:56.763 Zero copy mechanism will not be used. 00:22:56.763 [2024-11-19 10:23:16.111304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96841 ] 00:22:56.763 [2024-11-19 10:23:16.255797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.763 [2024-11-19 10:23:16.294544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.023 10:23:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.023 10:23:16 -- common/autotest_common.sh@862 -- # return 0 00:22:57.023 10:23:16 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:57.023 10:23:16 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:57.023 10:23:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:57.282 10:23:16 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.282 10:23:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:57.848 nvme0n1 00:22:57.848 10:23:17 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:57.848 10:23:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:57.848 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:57.848 Zero copy mechanism will not be used. 00:22:57.848 Running I/O for 2 seconds... 
00:22:59.802 00:22:59.802 Latency(us) 00:22:59.802 [2024-11-19T10:23:19.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.802 [2024-11-19T10:23:19.348Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:59.802 nvme0n1 : 2.00 7883.37 985.42 0.00 0.00 2026.18 677.70 3768.32 00:22:59.802 [2024-11-19T10:23:19.348Z] =================================================================================================================== 00:22:59.802 [2024-11-19T10:23:19.348Z] Total : 7883.37 985.42 0.00 0.00 2026.18 677.70 3768.32 00:22:59.802 0 00:22:59.802 10:23:19 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:59.802 10:23:19 -- host/digest.sh@92 -- # get_accel_stats 00:22:59.802 10:23:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:59.802 10:23:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:59.802 | select(.opcode=="crc32c") 00:22:59.802 | "\(.module_name) \(.executed)"' 00:22:59.802 10:23:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:00.065 10:23:19 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:00.065 10:23:19 -- host/digest.sh@93 -- # exp_module=software 00:23:00.065 10:23:19 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:00.065 10:23:19 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:00.065 10:23:19 -- host/digest.sh@97 -- # killprocess 96841 00:23:00.065 10:23:19 -- common/autotest_common.sh@936 -- # '[' -z 96841 ']' 00:23:00.065 10:23:19 -- common/autotest_common.sh@940 -- # kill -0 96841 00:23:00.065 10:23:19 -- common/autotest_common.sh@941 -- # uname 00:23:00.065 10:23:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:00.065 10:23:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96841 00:23:00.324 10:23:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:00.324 killing process with pid 96841 00:23:00.324 10:23:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:00.324 10:23:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96841' 00:23:00.324 Received shutdown signal, test time was about 2.000000 seconds 00:23:00.324 00:23:00.324 Latency(us) 00:23:00.324 [2024-11-19T10:23:19.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.324 [2024-11-19T10:23:19.870Z] =================================================================================================================== 00:23:00.324 [2024-11-19T10:23:19.870Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.324 10:23:19 -- common/autotest_common.sh@955 -- # kill 96841 00:23:00.324 10:23:19 -- common/autotest_common.sh@960 -- # wait 96841 00:23:00.324 10:23:19 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:23:00.324 10:23:19 -- host/digest.sh@77 -- # local rw bs qd 00:23:00.324 10:23:19 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:00.324 10:23:19 -- host/digest.sh@80 -- # rw=randwrite 00:23:00.324 10:23:19 -- host/digest.sh@80 -- # bs=4096 00:23:00.324 10:23:19 -- host/digest.sh@80 -- # qd=128 00:23:00.324 10:23:19 -- host/digest.sh@82 -- # bperfpid=96918 00:23:00.324 10:23:19 -- host/digest.sh@83 -- # waitforlisten 96918 /var/tmp/bperf.sock 00:23:00.324 10:23:19 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:00.324 10:23:19 -- 
common/autotest_common.sh@829 -- # '[' -z 96918 ']' 00:23:00.324 10:23:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:00.324 10:23:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:00.324 10:23:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:00.324 10:23:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.324 10:23:19 -- common/autotest_common.sh@10 -- # set +x 00:23:00.324 [2024-11-19 10:23:19.807748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:00.324 [2024-11-19 10:23:19.807871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96918 ] 00:23:00.582 [2024-11-19 10:23:19.941249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.582 [2024-11-19 10:23:19.975632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.582 10:23:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.582 10:23:20 -- common/autotest_common.sh@862 -- # return 0 00:23:00.582 10:23:20 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:00.582 10:23:20 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:00.582 10:23:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:00.841 10:23:20 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:00.841 10:23:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:01.408 nvme0n1 00:23:01.408 10:23:20 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:01.408 10:23:20 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:01.408 Running I/O for 2 seconds... 
00:23:03.941 00:23:03.941 Latency(us) 00:23:03.941 [2024-11-19T10:23:23.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.941 [2024-11-19T10:23:23.487Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:03.941 nvme0n1 : 2.00 21900.66 85.55 0.00 0.00 5837.61 2383.13 14060.45 00:23:03.941 [2024-11-19T10:23:23.487Z] =================================================================================================================== 00:23:03.941 [2024-11-19T10:23:23.487Z] Total : 21900.66 85.55 0.00 0.00 5837.61 2383.13 14060.45 00:23:03.941 0 00:23:03.941 10:23:22 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:03.941 10:23:22 -- host/digest.sh@92 -- # get_accel_stats 00:23:03.941 10:23:22 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:03.941 10:23:22 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:03.941 | select(.opcode=="crc32c") 00:23:03.941 | "\(.module_name) \(.executed)"' 00:23:03.941 10:23:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:03.941 10:23:23 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:03.941 10:23:23 -- host/digest.sh@93 -- # exp_module=software 00:23:03.941 10:23:23 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:03.941 10:23:23 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:03.941 10:23:23 -- host/digest.sh@97 -- # killprocess 96918 00:23:03.941 10:23:23 -- common/autotest_common.sh@936 -- # '[' -z 96918 ']' 00:23:03.941 10:23:23 -- common/autotest_common.sh@940 -- # kill -0 96918 00:23:03.941 10:23:23 -- common/autotest_common.sh@941 -- # uname 00:23:03.941 10:23:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:03.941 10:23:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96918 00:23:03.941 killing process with pid 96918 00:23:03.941 Received shutdown signal, test time was about 2.000000 seconds 00:23:03.941 00:23:03.941 Latency(us) 00:23:03.941 [2024-11-19T10:23:23.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.941 [2024-11-19T10:23:23.487Z] =================================================================================================================== 00:23:03.941 [2024-11-19T10:23:23.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:03.941 10:23:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:03.941 10:23:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:03.941 10:23:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96918' 00:23:03.941 10:23:23 -- common/autotest_common.sh@955 -- # kill 96918 00:23:03.941 10:23:23 -- common/autotest_common.sh@960 -- # wait 96918 00:23:03.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:03.941 10:23:23 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:23:03.941 10:23:23 -- host/digest.sh@77 -- # local rw bs qd 00:23:03.941 10:23:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:03.941 10:23:23 -- host/digest.sh@80 -- # rw=randwrite 00:23:03.941 10:23:23 -- host/digest.sh@80 -- # bs=131072 00:23:03.941 10:23:23 -- host/digest.sh@80 -- # qd=16 00:23:03.941 10:23:23 -- host/digest.sh@82 -- # bperfpid=96990 00:23:03.941 10:23:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:03.941 10:23:23 -- host/digest.sh@83 -- # waitforlisten 96990 /var/tmp/bperf.sock 00:23:03.941 10:23:23 -- common/autotest_common.sh@829 -- # '[' -z 96990 ']' 00:23:03.942 10:23:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:03.942 10:23:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.942 10:23:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:03.942 10:23:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.942 10:23:23 -- common/autotest_common.sh@10 -- # set +x 00:23:03.942 [2024-11-19 10:23:23.430016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:03.942 [2024-11-19 10:23:23.430363] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96990 ] 00:23:03.942 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:03.942 Zero copy mechanism will not be used. 00:23:04.200 [2024-11-19 10:23:23.567519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.200 [2024-11-19 10:23:23.606246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.200 10:23:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.200 10:23:23 -- common/autotest_common.sh@862 -- # return 0 00:23:04.200 10:23:23 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:23:04.200 10:23:23 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:23:04.200 10:23:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:04.766 10:23:24 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:04.766 10:23:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:05.024 nvme0n1 00:23:05.024 10:23:24 -- host/digest.sh@91 -- # bperf_py perform_tests 00:23:05.024 10:23:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:05.284 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:05.284 Zero copy mechanism will not be used. 00:23:05.284 Running I/O for 2 seconds... 
00:23:07.188 00:23:07.188 Latency(us) 00:23:07.188 [2024-11-19T10:23:26.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.188 [2024-11-19T10:23:26.734Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:07.188 nvme0n1 : 2.00 6963.48 870.44 0.00 0.00 2292.59 1817.13 11260.28 00:23:07.188 [2024-11-19T10:23:26.734Z] =================================================================================================================== 00:23:07.188 [2024-11-19T10:23:26.734Z] Total : 6963.48 870.44 0.00 0.00 2292.59 1817.13 11260.28 00:23:07.188 0 00:23:07.188 10:23:26 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:23:07.188 10:23:26 -- host/digest.sh@92 -- # get_accel_stats 00:23:07.188 10:23:26 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:07.188 10:23:26 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:07.188 | select(.opcode=="crc32c") 00:23:07.188 | "\(.module_name) \(.executed)"' 00:23:07.188 10:23:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:07.447 10:23:26 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:23:07.447 10:23:26 -- host/digest.sh@93 -- # exp_module=software 00:23:07.447 10:23:26 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:23:07.447 10:23:26 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:07.447 10:23:26 -- host/digest.sh@97 -- # killprocess 96990 00:23:07.447 10:23:26 -- common/autotest_common.sh@936 -- # '[' -z 96990 ']' 00:23:07.447 10:23:26 -- common/autotest_common.sh@940 -- # kill -0 96990 00:23:07.447 10:23:26 -- common/autotest_common.sh@941 -- # uname 00:23:07.447 10:23:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.447 10:23:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96990 00:23:07.447 killing process with pid 96990 00:23:07.447 Received shutdown signal, test time was about 2.000000 seconds 00:23:07.447 00:23:07.447 Latency(us) 00:23:07.447 [2024-11-19T10:23:26.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.447 [2024-11-19T10:23:26.993Z] =================================================================================================================== 00:23:07.447 [2024-11-19T10:23:26.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.447 10:23:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:07.447 10:23:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:07.447 10:23:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96990' 00:23:07.447 10:23:26 -- common/autotest_common.sh@955 -- # kill 96990 00:23:07.447 10:23:26 -- common/autotest_common.sh@960 -- # wait 96990 00:23:07.705 10:23:27 -- host/digest.sh@126 -- # killprocess 96729 00:23:07.705 10:23:27 -- common/autotest_common.sh@936 -- # '[' -z 96729 ']' 00:23:07.705 10:23:27 -- common/autotest_common.sh@940 -- # kill -0 96729 00:23:07.705 10:23:27 -- common/autotest_common.sh@941 -- # uname 00:23:07.705 10:23:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:07.705 10:23:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96729 00:23:07.705 10:23:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:07.705 10:23:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:07.705 killing process with pid 96729 00:23:07.705 10:23:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96729' 
00:23:07.705 10:23:27 -- common/autotest_common.sh@955 -- # kill 96729 00:23:07.705 10:23:27 -- common/autotest_common.sh@960 -- # wait 96729 00:23:07.964 00:23:07.964 real 0m15.324s 00:23:07.964 user 0m29.943s 00:23:07.964 sys 0m4.171s 00:23:07.964 10:23:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:07.964 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.964 ************************************ 00:23:07.964 END TEST nvmf_digest_clean 00:23:07.964 ************************************ 00:23:07.964 10:23:27 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:23:07.965 10:23:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:07.965 10:23:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:07.965 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.965 ************************************ 00:23:07.965 START TEST nvmf_digest_error 00:23:07.965 ************************************ 00:23:07.965 10:23:27 -- common/autotest_common.sh@1114 -- # run_digest_error 00:23:07.965 10:23:27 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:23:07.965 10:23:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:07.965 10:23:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.965 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.965 10:23:27 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:07.965 10:23:27 -- nvmf/common.sh@469 -- # nvmfpid=97091 00:23:07.965 10:23:27 -- nvmf/common.sh@470 -- # waitforlisten 97091 00:23:07.965 10:23:27 -- common/autotest_common.sh@829 -- # '[' -z 97091 ']' 00:23:07.965 10:23:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.965 10:23:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.965 10:23:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.965 10:23:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.965 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:07.965 [2024-11-19 10:23:27.361519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:07.965 [2024-11-19 10:23:27.361613] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.965 [2024-11-19 10:23:27.499203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.223 [2024-11-19 10:23:27.539732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:08.223 [2024-11-19 10:23:27.539934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.223 [2024-11-19 10:23:27.539957] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.223 [2024-11-19 10:23:27.539968] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:08.223 [2024-11-19 10:23:27.540006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.223 10:23:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.223 10:23:27 -- common/autotest_common.sh@862 -- # return 0 00:23:08.223 10:23:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:08.223 10:23:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.223 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:08.223 10:23:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.223 10:23:27 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:08.223 10:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.223 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:08.223 [2024-11-19 10:23:27.664456] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:08.223 10:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.223 10:23:27 -- host/digest.sh@104 -- # common_target_config 00:23:08.223 10:23:27 -- host/digest.sh@43 -- # rpc_cmd 00:23:08.223 10:23:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.223 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:08.223 null0 00:23:08.223 [2024-11-19 10:23:27.738700] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.223 [2024-11-19 10:23:27.762895] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:08.223 10:23:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.223 10:23:27 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:23:08.223 10:23:27 -- host/digest.sh@54 -- # local rw bs qd 00:23:08.223 10:23:27 -- host/digest.sh@56 -- # rw=randread 00:23:08.482 10:23:27 -- host/digest.sh@56 -- # bs=4096 00:23:08.482 10:23:27 -- host/digest.sh@56 -- # qd=128 00:23:08.482 10:23:27 -- host/digest.sh@58 -- # bperfpid=97122 00:23:08.482 10:23:27 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:08.482 10:23:27 -- host/digest.sh@60 -- # waitforlisten 97122 /var/tmp/bperf.sock 00:23:08.482 10:23:27 -- common/autotest_common.sh@829 -- # '[' -z 97122 ']' 00:23:08.482 10:23:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:08.482 10:23:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:08.482 10:23:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:08.482 10:23:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.482 10:23:27 -- common/autotest_common.sh@10 -- # set +x 00:23:08.482 [2024-11-19 10:23:27.816670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:08.482 [2024-11-19 10:23:27.816772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97122 ] 00:23:08.482 [2024-11-19 10:23:27.968750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.482 [2024-11-19 10:23:28.014211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.741 10:23:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.741 10:23:28 -- common/autotest_common.sh@862 -- # return 0 00:23:08.741 10:23:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:08.741 10:23:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:08.999 10:23:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:08.999 10:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.999 10:23:28 -- common/autotest_common.sh@10 -- # set +x 00:23:08.999 10:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.999 10:23:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:08.999 10:23:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:09.566 nvme0n1 00:23:09.566 10:23:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:09.566 10:23:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.566 10:23:28 -- common/autotest_common.sh@10 -- # set +x 00:23:09.566 10:23:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.566 10:23:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:09.566 10:23:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:09.566 Running I/O for 2 seconds... 
00:23:09.566 [2024-11-19 10:23:28.997490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:28.997550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:28.997566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.010198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.010240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.010255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.025864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.025909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.025924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.040279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.040318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.040333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.052632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.052672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.052686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.067157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.067195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.067209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.083207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.083251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.083266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.566 [2024-11-19 10:23:29.098724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.566 [2024-11-19 10:23:29.098768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.566 [2024-11-19 10:23:29.098782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.114665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.114706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.114720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.131222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.131261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.131275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.145014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.145054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.145069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.157570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.157611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.157625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.173073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.173117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.173132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.187981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.188020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.188034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.201363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.201403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.201417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.213061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.213104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.213118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.225993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.226034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.226048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.239123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.239182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.239197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.253593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.253633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.253647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.268674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.268712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.268726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.279554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.279593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.279608] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.295168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.295211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.295225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.308867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.308904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.825 [2024-11-19 10:23:29.308918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.825 [2024-11-19 10:23:29.323965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.825 [2024-11-19 10:23:29.324001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.826 [2024-11-19 10:23:29.324016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.826 [2024-11-19 10:23:29.338644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.826 [2024-11-19 10:23:29.338683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.826 [2024-11-19 10:23:29.338696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.826 [2024-11-19 10:23:29.354736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.826 [2024-11-19 10:23:29.354776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.826 [2024-11-19 10:23:29.354792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:09.826 [2024-11-19 10:23:29.366158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:09.826 [2024-11-19 10:23:29.366195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.826 [2024-11-19 10:23:29.366209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.378078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.378114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:10.084 [2024-11-19 10:23:29.378128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.393471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.393509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.393523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.403949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.403985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.403998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.423154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.423191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.423204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.434881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.434915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.434929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.450022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.450061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.450074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.464838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.464879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.464893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.478551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:536 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.478605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.489385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.489425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.489439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.084 [2024-11-19 10:23:29.502948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.084 [2024-11-19 10:23:29.502989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.084 [2024-11-19 10:23:29.503014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.513241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.513279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.528447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.528487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.528501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.544049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.544088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.544103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.558806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.558869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.558890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.571584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.571628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.571642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.583336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.583376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.583390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.596045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.596085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.596099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.609638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.609677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.609692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.085 [2024-11-19 10:23:29.623073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.085 [2024-11-19 10:23:29.623114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.085 [2024-11-19 10:23:29.623128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.633677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.633720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.633735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.647227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.647268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.647294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.659573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.659613] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.659627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.671180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.671221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.671235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.683116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.683155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.683169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.697426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.697466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.697480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.713553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.713592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.713606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.726585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.726625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.726639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.738326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.738364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.738377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.753555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.753594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.753607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.768718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.768757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.768772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.783397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.783435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.783449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.799548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.799586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.799601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.814335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.814377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.814391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.828906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.828946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.828960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.844035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.844077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.844090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.860488] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.860531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.860545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.873642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.873679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.873694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.344 [2024-11-19 10:23:29.884238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.344 [2024-11-19 10:23:29.884274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.344 [2024-11-19 10:23:29.884288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.603 [2024-11-19 10:23:29.899771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.603 [2024-11-19 10:23:29.899808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.603 [2024-11-19 10:23:29.899836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.603 [2024-11-19 10:23:29.912531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.603 [2024-11-19 10:23:29.912568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.603 [2024-11-19 10:23:29.912581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.603 [2024-11-19 10:23:29.924197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.603 [2024-11-19 10:23:29.924234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.603 [2024-11-19 10:23:29.924248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.603 [2024-11-19 10:23:29.935769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.603 [2024-11-19 10:23:29.935806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.603 [2024-11-19 10:23:29.935833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:10.603 [2024-11-19 10:23:29.949844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:29.949880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:29.949894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:29.962569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:29.962605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:29.962619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:29.974334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:29.974371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:29.974385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:29.986602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:29.986638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:29.986652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.001115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.001151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.014669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.014708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.014722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.029541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.029590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.029605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.045254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.045309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.045324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.060518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.060558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.060573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.073504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.073544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.073558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.085254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.085290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.085303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.100202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.100239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.100253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.113915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.113951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.113965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.126741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.126778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.126792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.604 [2024-11-19 10:23:30.137954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.604 [2024-11-19 10:23:30.137991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.604 [2024-11-19 10:23:30.138004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.863 [2024-11-19 10:23:30.152784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.863 [2024-11-19 10:23:30.152834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.863 [2024-11-19 10:23:30.152849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.863 [2024-11-19 10:23:30.165785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.863 [2024-11-19 10:23:30.165836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.863 [2024-11-19 10:23:30.165851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.863 [2024-11-19 10:23:30.177732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.863 [2024-11-19 10:23:30.177769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.863 [2024-11-19 10:23:30.177783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.863 [2024-11-19 10:23:30.191919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.863 [2024-11-19 10:23:30.191956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.191969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.208385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.208424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.208438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.219094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.219131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.219145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.232773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.232814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.232841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.247308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.247347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.247362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.259836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.259871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.259885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.276062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.276099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.276114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.288242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.288281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.288295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.300935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.300973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.300987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.312573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.312611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:10.864 [2024-11-19 10:23:30.312625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.324787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.324834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.324850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.337434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.337471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.337485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.353066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.353104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.353118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.367995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.368031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.368046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.380190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.380226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.380239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:10.864 [2024-11-19 10:23:30.396397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:10.864 [2024-11-19 10:23:30.396436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.864 [2024-11-19 10:23:30.396451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.411204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.411241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24604 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.411255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.423608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.423646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.423659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.437366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.437406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.437421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.448607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.448645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.448659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.463639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.463678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.463692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.476764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.476802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.476816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.491097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.491135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.507336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.507375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.507389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.520926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.123 [2024-11-19 10:23:30.520963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.123 [2024-11-19 10:23:30.520977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.123 [2024-11-19 10:23:30.532524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.532561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.532575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.548543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.548583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.548598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.565152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.565192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.565206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.578740] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.578778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.578792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.590394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.590432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.590445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.604733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.604773] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.604788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.620515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.620553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.620566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.636019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.636055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.636069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.651032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.651068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.651081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.124 [2024-11-19 10:23:30.663716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.124 [2024-11-19 10:23:30.663752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.124 [2024-11-19 10:23:30.663766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.676903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.676938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.676953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.689905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.689940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.689954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.703526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 
00:23:11.383 [2024-11-19 10:23:30.703564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.703579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.717842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.717877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.717891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.729351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.729388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.742463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.742499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.742513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.757839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.757875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.757889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.771736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.788970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.789011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.789025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.804961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.804999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.805013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.819315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.819352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.819366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.830463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.830501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.830515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.845516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.845554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.845567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.860880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.860920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.860934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.876493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.383 [2024-11-19 10:23:30.876532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.383 [2024-11-19 10:23:30.876546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.383 [2024-11-19 10:23:30.888858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.384 [2024-11-19 10:23:30.888895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.384 [2024-11-19 10:23:30.888910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.384 [2024-11-19 10:23:30.900770] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.384 [2024-11-19 10:23:30.900808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.384 [2024-11-19 10:23:30.900833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.384 [2024-11-19 10:23:30.912343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.384 [2024-11-19 10:23:30.912380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.384 [2024-11-19 10:23:30.912394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.384 [2024-11-19 10:23:30.924676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.384 [2024-11-19 10:23:30.924715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.384 [2024-11-19 10:23:30.924729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.642 [2024-11-19 10:23:30.937011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.642 [2024-11-19 10:23:30.937048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.642 [2024-11-19 10:23:30.937062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.642 [2024-11-19 10:23:30.948260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.642 [2024-11-19 10:23:30.948295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.642 [2024-11-19 10:23:30.948310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.642 [2024-11-19 10:23:30.965244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.642 [2024-11-19 10:23:30.965284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.642 [2024-11-19 10:23:30.965298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:11.642 [2024-11-19 10:23:30.974133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa2a7f0) 00:23:11.642 [2024-11-19 10:23:30.974168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.642 [2024-11-19 10:23:30.974182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:11.642 00:23:11.642 Latency(us) 00:23:11.642 [2024-11-19T10:23:31.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.642 [2024-11-19T10:23:31.188Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:11.642 nvme0n1 : 2.00 18462.80 72.12 0.00 0.00 6925.73 2904.44 20852.36 00:23:11.642 [2024-11-19T10:23:31.188Z] =================================================================================================================== 00:23:11.642 [2024-11-19T10:23:31.188Z] Total : 18462.80 72.12 0.00 0.00 6925.73 2904.44 20852.36 00:23:11.642 0 00:23:11.642 10:23:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:11.642 10:23:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:11.642 10:23:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:11.642 | .driver_specific 00:23:11.642 | .nvme_error 00:23:11.642 | .status_code 00:23:11.642 | .command_transient_transport_error' 00:23:11.642 10:23:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:11.901 10:23:31 -- host/digest.sh@71 -- # (( 145 > 0 )) 00:23:11.901 10:23:31 -- host/digest.sh@73 -- # killprocess 97122 00:23:11.901 10:23:31 -- common/autotest_common.sh@936 -- # '[' -z 97122 ']' 00:23:11.901 10:23:31 -- common/autotest_common.sh@940 -- # kill -0 97122 00:23:11.901 10:23:31 -- common/autotest_common.sh@941 -- # uname 00:23:11.901 10:23:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.901 10:23:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97122 00:23:11.901 10:23:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:11.901 10:23:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:11.901 killing process with pid 97122 00:23:11.901 10:23:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97122' 00:23:11.901 10:23:31 -- common/autotest_common.sh@955 -- # kill 97122 00:23:11.901 Received shutdown signal, test time was about 2.000000 seconds 00:23:11.901 00:23:11.901 Latency(us) 00:23:11.901 [2024-11-19T10:23:31.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.901 [2024-11-19T10:23:31.447Z] =================================================================================================================== 00:23:11.901 [2024-11-19T10:23:31.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.901 10:23:31 -- common/autotest_common.sh@960 -- # wait 97122 00:23:12.171 10:23:31 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:23:12.171 10:23:31 -- host/digest.sh@54 -- # local rw bs qd 00:23:12.171 10:23:31 -- host/digest.sh@56 -- # rw=randread 00:23:12.171 10:23:31 -- host/digest.sh@56 -- # bs=131072 00:23:12.171 10:23:31 -- host/digest.sh@56 -- # qd=16 00:23:12.171 10:23:31 -- host/digest.sh@58 -- # bperfpid=97193 00:23:12.171 10:23:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:12.171 10:23:31 -- host/digest.sh@60 -- # waitforlisten 97193 /var/tmp/bperf.sock 00:23:12.171 10:23:31 -- common/autotest_common.sh@829 -- # '[' -z 97193 ']' 00:23:12.171 10:23:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:12.171 10:23:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:12.171 10:23:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:12.171 10:23:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.171 10:23:31 -- common/autotest_common.sh@10 -- # set +x 00:23:12.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:12.171 Zero copy mechanism will not be used. 00:23:12.171 [2024-11-19 10:23:31.519289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:12.171 [2024-11-19 10:23:31.519387] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97193 ] 00:23:12.171 [2024-11-19 10:23:31.659344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.171 [2024-11-19 10:23:31.698021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.440 10:23:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.440 10:23:31 -- common/autotest_common.sh@862 -- # return 0 00:23:12.440 10:23:31 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:12.440 10:23:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:12.698 10:23:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:12.698 10:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.698 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:23:12.698 10:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.698 10:23:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:12.698 10:23:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:12.956 nvme0n1 00:23:12.956 10:23:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:12.956 10:23:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.956 10:23:32 -- common/autotest_common.sh@10 -- # set +x 00:23:12.956 10:23:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.956 10:23:32 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:12.956 10:23:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:13.216 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:13.216 Zero copy mechanism will not be used. 00:23:13.216 Running I/O for 2 seconds... 
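For readability, the traced host/digest.sh steps above follow a repeating pattern: read back the transient-error count from the previous bdevperf run, tear that bdevperf down, start a new one, re-arm the CRC32C error injection, attach the controller with data digest enabled, and run the workload. The sketch below reassembles that pattern only from the commands visible in the trace; the bperf_rpc/bperf_py expansions, the bdevperf flags, and the jq filter are copied from the log, while the wrapper function bodies, the backgrounding, and the socket behind rpc_cmd are assumptions (the trace does not expand them), so treat this as an illustrative outline rather than the script itself.

  # bperf_rpc/bperf_py talk to the bdevperf instance over its private RPC socket
  # (expansions shown at digest.sh@18 and digest.sh@19 in the trace).
  bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  bperf_py()  { /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock "$@"; }

  # run_bperf_err randread 131072 16: start bdevperf idle (-z, wait for RPC configuration),
  # 131072-byte random reads, queue depth 16, 2-second run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &

  # Keep per-status-code NVMe error statistics and retry failed I/O indefinitely,
  # so injected digest errors are counted as transient rather than failing the bdev.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # rpc_cmd addresses the test suite's default RPC socket (not expanded in this trace):
  # clear any previous CRC32C injection, attach the NVMe-oF/TCP controller with data
  # digest enabled (--ddgst), then arm CRC32C corruption with the -i 32 setting used here.
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the workload; each digest mismatch surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
  # and get_transient_errcount reads the total back from the bdev iostat afterwards.
  bperf_py perform_tests
  bperf_rpc bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The "(( 145 > 0 ))" check in the trace is this readback being asserted non-zero for the previous run before its bdevperf process (pid 97122) is killed and the next run (pid 97193) is started.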
00:23:13.216 [2024-11-19 10:23:32.520340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.520392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.216 [2024-11-19 10:23:32.520407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.216 [2024-11-19 10:23:32.524513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.524551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.216 [2024-11-19 10:23:32.524565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.216 [2024-11-19 10:23:32.528354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.528391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.216 [2024-11-19 10:23:32.528404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.216 [2024-11-19 10:23:32.532710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.532750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.216 [2024-11-19 10:23:32.532765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.216 [2024-11-19 10:23:32.536618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.536657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.216 [2024-11-19 10:23:32.536670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.216 [2024-11-19 10:23:32.540752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.216 [2024-11-19 10:23:32.540790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.540812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.544543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.544582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.544596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.548319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.548356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.548369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.552148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.552186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.552200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.555817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.555868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.555881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.559602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.559649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.559662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.563865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.563902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.563915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.567923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.567961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.567974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.572120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.572157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.572171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.575627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.575663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.575676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.579282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.579321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.579335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.582939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.582977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.583006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.587137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.587175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.587189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.590884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.590921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.590934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.594638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.594675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.594688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.598911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.598947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:13.217 [2024-11-19 10:23:32.598961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.602834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.602869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.602882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.606730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.606768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.606780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.609995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.610035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.610048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.613466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.613503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.613517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.617720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.617758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.617771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.621336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.621374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.621387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.625269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.625306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.625319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.628977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.629015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.629028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.632441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.632481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.632495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.635974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.636011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.217 [2024-11-19 10:23:32.636024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.217 [2024-11-19 10:23:32.639575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.217 [2024-11-19 10:23:32.639612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.639625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.644060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.644097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.644111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.648346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.648384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.648397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.652592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.652631] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.652645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.656336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.656373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.656386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.660532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.660578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.660592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.664119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.664158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.664172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.668359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.668401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.668415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.671902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.671940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.671953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.675591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.675629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.675642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.679437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.679476] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.679489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.683607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.683646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.683659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.687556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.687594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.687608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.690939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.690976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.691010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.694711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.694748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.694761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.698334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.698371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.698384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.702555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.702593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.702606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.706183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 
00:23:13.218 [2024-11-19 10:23:32.706221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.706234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.710483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.710519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.710532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.714785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.714837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.714853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.718269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.718305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.718318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.722000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.722037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.722049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.725531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.725568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.725582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.729286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.729324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.729337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.733061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.733098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.733111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.736972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.737009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.737022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.740341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.740379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.218 [2024-11-19 10:23:32.740392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.218 [2024-11-19 10:23:32.743999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.218 [2024-11-19 10:23:32.744037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.219 [2024-11-19 10:23:32.744049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.219 [2024-11-19 10:23:32.748292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.219 [2024-11-19 10:23:32.748329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.219 [2024-11-19 10:23:32.748343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.219 [2024-11-19 10:23:32.751806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.219 [2024-11-19 10:23:32.751855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.219 [2024-11-19 10:23:32.751868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.219 [2024-11-19 10:23:32.756073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.219 [2024-11-19 10:23:32.756111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.219 [2024-11-19 10:23:32.756125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.219 [2024-11-19 10:23:32.759689] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.219 [2024-11-19 10:23:32.759728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.219 [2024-11-19 10:23:32.759741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.763621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.763658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.763671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.768297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.768334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.768348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.772104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.772142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.772155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.775458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.775496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.775508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.779874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.779912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.779924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.783746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.783783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.783795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:13.480 [2024-11-19 10:23:32.787146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.787183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.787196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.791453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.791490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.791503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.795210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.795247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.795261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.798728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.798770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.798783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.802307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.802344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.802357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.806445] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.806483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.806496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.810140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.810177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.810190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.813679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.813715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.813728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.817847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.817884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.817897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.822039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.822076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.822088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.826199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.826237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.826249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.830475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.830512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.830525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.834430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.834468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.834482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.838832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.838867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.838880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.842709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.842746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.842759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.480 [2024-11-19 10:23:32.846496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.480 [2024-11-19 10:23:32.846533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.480 [2024-11-19 10:23:32.846547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.850492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.850529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.850542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.854722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.854759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.854772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.858689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.858726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.858739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.862335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.862372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.862385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.865696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.865733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.865747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.868708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.868744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.868756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.872633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.872669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.872682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.876567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.876603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.876616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.880644] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.880680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.880693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.884907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.884943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.884956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.888124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.888162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.888175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.892154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.892192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 
[2024-11-19 10:23:32.892205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.896177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.896214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.896228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.899642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.899679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.903246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.903283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.903296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.907171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.907208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.907221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.911136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.911172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.911185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.915067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.915104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.915117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.919183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.919219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.919232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.922868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.922904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.922916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.926436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.926472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.926485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.930001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.930037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.930050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.933059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.933097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.933110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.936966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.937009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.937030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.940885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.940919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.940932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.944850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.481 [2024-11-19 10:23:32.944885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.481 [2024-11-19 10:23:32.944898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.481 [2024-11-19 10:23:32.948862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.948897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.948909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.952778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.952815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.952841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.956731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.956768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.956780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.960961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.961004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.961025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.964731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.964771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.964784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.968708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.968745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.968758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.972854] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.972889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.972902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.976881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.976916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.976929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.980466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.980502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.980514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.984629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.984667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.984680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.988266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.988303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.988316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.992496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.992532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.992545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.996301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:32.996337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.996350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:32.999590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 
[2024-11-19 10:23:32.999626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:32.999639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.003108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.003145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.003159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.006605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.006644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.006656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.010347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.010384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.010397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.014161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.014197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.014211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.018487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.018525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.018537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.482 [2024-11-19 10:23:33.022451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.482 [2024-11-19 10:23:33.022488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.482 [2024-11-19 10:23:33.022500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.026315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.026352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.026365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.029742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.029779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.029792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.033531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.033568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.033581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.037284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.037320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.037332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.040880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.040928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.044778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.044814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.044841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.048991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.049036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.049051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.052925] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.052960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.052974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.055971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.742 [2024-11-19 10:23:33.056019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.742 [2024-11-19 10:23:33.060006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.742 [2024-11-19 10:23:33.060041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.064529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.064565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.064577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.068489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.068525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.068538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.072464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.072501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.072513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.076186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.076224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.076237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:13.743 [2024-11-19 10:23:33.079979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.080016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.080028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.083944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.083980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.083993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.086881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.086934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.086949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.091031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.091068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.091081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.095050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.095090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.095103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.098953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.098989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.099014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.102578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.102615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.102627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.105847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.105882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.105895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.109171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.109208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.109220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.112520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.112556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.112568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.116775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.116812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.116840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.120515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.120551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.120564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.124673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.124711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.124724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.128355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.128393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.128406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.132611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.132648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.132661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.136173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.136209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.743 [2024-11-19 10:23:33.136222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.743 [2024-11-19 10:23:33.140337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.743 [2024-11-19 10:23:33.140373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.140386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.143660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.143697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.143710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.147862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.147897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.147910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.151868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.151905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.151917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.155714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.155750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.155762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.159991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.160027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.160040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.164293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.164330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.164347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.168359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.168396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.168409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.172203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.172253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.176423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.176461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.176474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.180015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.180051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.180065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.183965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.184002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 
[2024-11-19 10:23:33.184015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.187976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.188014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.188027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.192005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.192042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.192055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.196113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.196150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.196163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.200052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.200088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.200101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.203733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.203770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.203784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.208085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.208121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.208134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.212214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.212252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.212264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.216097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.216134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.216147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.220082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.220120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.744 [2024-11-19 10:23:33.220132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.744 [2024-11-19 10:23:33.223902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.744 [2024-11-19 10:23:33.223938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.223951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.228245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.228282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.228295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.232533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.232571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.232584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.236504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.236541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.236554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.240499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.240536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.240549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.243844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.243878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.243891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.247252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.247289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.247302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.251544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.251581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.251593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.255088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.255130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.255143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.258836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.258870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.258884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.262333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.262370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.262383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.266072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.266108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.266122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.270323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.270360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.270373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.274130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.274167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.274180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.278258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.278295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.278308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:13.745 [2024-11-19 10:23:33.282269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:13.745 [2024-11-19 10:23:33.282305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:13.745 [2024-11-19 10:23:33.282318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.286252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.286290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.289718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.289755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.289767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.293431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 
[2024-11-19 10:23:33.293467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.293480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.297251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.297289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.297302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.301126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.301162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.301176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.304800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.304845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.304859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.308799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.308845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.308859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.312650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.312687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.312700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.316465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.316501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.316514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.320504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.320553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.324639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.324676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.327935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.327971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.327983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.331976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.332013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.332026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.336152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.336189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.336201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.339795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.339844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.339857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.344066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.344103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.344116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.347727] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.347763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.347776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.350784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.350835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.350850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.005 [2024-11-19 10:23:33.354722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.005 [2024-11-19 10:23:33.354758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.005 [2024-11-19 10:23:33.354771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.358320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.358357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.358369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.363016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.363052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.363065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.367072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.367107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.367121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.369917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.369952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.369964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:14.006 [2024-11-19 10:23:33.373602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.373639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.373652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.377257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.377295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.377308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.380888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.380924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.380936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.384519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.384556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.384570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.388291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.388327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.388341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.392227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.392262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.392275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.396226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.396262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.396275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.400082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.400119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.400131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.404062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.404099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.404111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.407959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.408007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.411782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.411832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.411848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.415633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.415670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.415683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.419216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.419254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.419267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.422902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.422939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.422951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.426440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.426476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.429971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.430008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.430020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.434195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.434232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.434245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.437929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.437965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.437978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.441756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.441794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.441807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.445728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.445764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.445777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.450103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.450141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.450154] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.454061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.454098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.006 [2024-11-19 10:23:33.454111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.006 [2024-11-19 10:23:33.458092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.006 [2024-11-19 10:23:33.458130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.458142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.461626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.461663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.461676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.465652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.465690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.465703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.469580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.469618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.469631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.473561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.473598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.473610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.477346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.477383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 
[2024-11-19 10:23:33.477396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.481254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.481291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.481305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.485215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.485252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.485265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.489327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.489365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.489377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.492968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.493005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.493017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.496094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.496130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.496143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.500368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.500405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.500418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.504036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.504072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.504085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.507735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.507771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.507783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.511978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.512013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.512027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.515340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.515377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.515389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.518812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.518861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.518875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.522407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.522442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.522455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.526632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.526670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.526683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.530718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.530755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.530768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.534045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.534081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.534094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.537776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.537812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.537838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.542172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.542211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.542224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.007 [2024-11-19 10:23:33.545477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.007 [2024-11-19 10:23:33.545513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.007 [2024-11-19 10:23:33.545525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.548782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.548834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.548849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.551748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.551784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.551797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.555949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.555986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.555999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.560032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.560069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.560082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.564031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.564067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.564080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.567547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.567583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.567595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.571956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.571992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.576032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.576070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.576083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.579942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.579978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.579991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.583962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 
[2024-11-19 10:23:33.583998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.584011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.587323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.587359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.587372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.590360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.590396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.590409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.594515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.594553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.594566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.598214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.598251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.598264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.601666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.601703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.601716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.605324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.605360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.605373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.609007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.609044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.609057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.612431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.612469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.612481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.616551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.616588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.616601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.620252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.620289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.620302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.624056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.624093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.624106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.627489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.627526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.627539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.631720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.631758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.631771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.635597] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.635634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.635647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.639443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.639480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.639493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.643072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.643109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.643123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.647158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.647194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.647207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.651320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.651358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.651370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.654490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.654525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.654539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.658625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.658661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.658674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.662370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.662406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.662419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.665869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.665904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.665916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.668902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.668938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.668950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.673137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.673173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.673186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.676635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.676671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.676684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.680673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.680709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.680722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.684430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.684467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.684481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.687957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.687993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.688006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.691976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.692013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.692026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.695464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.695501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.695514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.699082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.699118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.699131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.702655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.702690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.702703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.706309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.706346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.706359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.709785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.266 [2024-11-19 10:23:33.709836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.266 [2024-11-19 10:23:33.709851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.266 [2024-11-19 10:23:33.713637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.713674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.713687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.717725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.717762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.717775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.721179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.721215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.721228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.724967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.725002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.725015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.728947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.728982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.728995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.732750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.732788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.732801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.737027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.737064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 
[2024-11-19 10:23:33.737076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.740839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.740874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.740887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.744366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.744403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.744415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.747637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.747674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.747686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.751022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.751061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.751074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.754921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.754956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.758638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.758674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.758688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.762858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.762893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.762905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.766694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.766734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.766747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.769973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.770008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.770022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.773063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.773099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.773112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.776640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.776677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.776690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.780914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.780951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.780964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.784492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.784530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.784543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.788029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.788067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.788079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.791873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.791910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.791923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.796007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.796048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.796062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.799403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.799451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.803198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.803235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.803248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.267 [2024-11-19 10:23:33.807520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.267 [2024-11-19 10:23:33.807557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.267 [2024-11-19 10:23:33.807571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.810782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.810831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.810845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.814914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.814970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.814983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.818892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.818928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.818941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.822606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.822642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.822655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.826880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.826917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.826929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.830800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.830847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.830861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.834756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.834793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.834806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.838561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.838598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.838610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.841994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 
[2024-11-19 10:23:33.842031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.842044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.845667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.845704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.845718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.849053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.849105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.852655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.852693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.852706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.856727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.856765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.856778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.860093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.860130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.860143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.863758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.863793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.863806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.867225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.867263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.867276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.871229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.871267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.871280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.875515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.527 [2024-11-19 10:23:33.875552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.527 [2024-11-19 10:23:33.875564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.527 [2024-11-19 10:23:33.880136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.880173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.880187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.884197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.884238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.884258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.888291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.888330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.888343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.892171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.892208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.892222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.895794] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.895847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.899635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.899673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.899687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.903242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.903279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.903292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.907340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.907377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.907390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.910588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.910624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.910637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.913881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.913918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.913931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.917505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.917541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.917555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:23:14.528 [2024-11-19 10:23:33.921454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.921493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.921506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.925036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.925073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.925086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.928637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.928673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.928686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.933228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.933265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.933278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.936668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.936705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.936719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.940165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.940202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.940215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.943860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.943896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.943908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.947468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.947505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.947518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.951120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.951157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.951171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.955645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.955682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.955694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.958964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.959011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.959025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.963284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.963322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.963335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.967301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.967339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.967352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.970863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.970900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.970912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.974685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.974723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.974736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.978344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.978381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.978394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.982005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.982041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.982054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.986191] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.986230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.986243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.990442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.990480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.990493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.994081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.994117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:33.994129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:33.997749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:33.997786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 
[2024-11-19 10:23:33.997799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.001859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.001895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.001907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.006127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.006166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.006179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.010101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.010138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.010151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.013481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.013519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.013531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.016705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.016750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.016763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.020652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.020688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.020701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.025413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.025452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22016 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.025465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.029556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.029593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.029606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.033260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.033298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.033311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.036703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.036740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.036752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.040478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.040514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.040527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.044757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.044794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.044807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.049781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.049838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.049854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.055256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.055316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.055330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.061358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.061411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.061430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.528 [2024-11-19 10:23:34.067100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.528 [2024-11-19 10:23:34.067164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.528 [2024-11-19 10:23:34.067185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.072900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.072942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.072956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.079149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.079207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.079227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.084762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.084815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.084850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.091141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.091199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.091219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.096803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.096870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.096890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.103196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.103255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.103284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.107995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.108048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.108068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.112499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.112549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.112568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.118598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.118650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.118665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.124619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.124685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.124705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.128697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.128746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.128766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.135026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 
00:23:14.789 [2024-11-19 10:23:34.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.135101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.140356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.140409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.140427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.145096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.145145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.145164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.150310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.150362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.150380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.155779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.155844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.155864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.161932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.161984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.162006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.167399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.167448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.167467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.789 [2024-11-19 10:23:34.172073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.789 [2024-11-19 10:23:34.172113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.789 [2024-11-19 10:23:34.172127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.175890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.175925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.175938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.179693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.179730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.179743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.183659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.183695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.183709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.187520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.187556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.187568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.191788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.191838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.191853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.195259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.195296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.195309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.199016] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.199051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.199064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.203198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.203234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.203247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.207226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.207261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.207274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.211228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.211263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.211275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.215156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.215192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.215204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.218403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.218438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.218450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.222354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.222388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.222401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
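(Editor's note, hedged.) The records above and below repeat one three-record pattern: nvme_tcp.c:1391 reports a data digest mismatch on the TCP qpair (the NVMe/TCP data digest is a CRC32C over the PDU payload), nvme_io_qpair_print_command prints the affected READ, and spdk_nvme_print_completion prints its completion with status (00/22), i.e. SCT 0x0 (generic command status) / SC 0x22 (Transient Transport Error), with dnr:0 so the host is allowed to retry. Below is a minimal sketch for tallying these records when scanning a console log like this one; it assumes the exact *ERROR*/*NOTICE* text shown here, and the helper itself is hypothetical, not part of the SPDK tree or this test.

import re
import sys
from collections import Counter

# One digest failure produces three records in the log above: the *ERROR* line
# from nvme_tcp.c, the offending READ printed by nvme_io_qpair_print_command,
# and its completion printed with status (00/22), i.e. SCT 0x0 (generic) /
# SC 0x22 (Transient Transport Error); dnr:0 means the command may be retried.
DIGEST = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
COMPLETION = re.compile(r"TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+) cid:(\d+)")

def summarize(lines):
    digest_errors = 0
    per_cid = Counter()
    for line in lines:
        digest_errors += len(DIGEST.findall(line))
        for qid, cid in COMPLETION.findall(line):
            per_cid[(int(qid), int(cid))] += 1
    return digest_errors, per_cid

if __name__ == "__main__":
    errors, per_cid = summarize(sys.stdin)
    print(f"data digest errors: {errors}")
    for (qid, cid), count in sorted(per_cid.items()):
        print(f"qid {qid} cid {cid}: {count} transient transport completions")

Feeding the console output to it on stdin (for example, python3 tally_digest_errors.py < console.log, a hypothetical file name) prints the total number of digest mismatches and one completion count per qid/cid pair, which is usually enough to confirm that every injected mismatch surfaced as a retryable transient transport error rather than a hard failure.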
00:23:14.790 [2024-11-19 10:23:34.226081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.226116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.226129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.229598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.229634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.229646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.233265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.233303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.233316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.237633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.237669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.237682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.241737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.241772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.241785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.245569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.245604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.245617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.248031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.248065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.248077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.252224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.252260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.252272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.256344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.256380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.256393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.259968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.260003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.260016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.263489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.263525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.263538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.267112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.267147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.267160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.270567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.270602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.790 [2024-11-19 10:23:34.270614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.790 [2024-11-19 10:23:34.274287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.790 [2024-11-19 10:23:34.274325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.274338] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.278144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.278181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.278193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.281571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.281607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.281620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.285804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.285849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.285863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.289073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.289108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.289122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.292901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.292936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.292948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.297323] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.297359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.297372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.301170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.301205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.301218] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.305297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.305333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.305346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.308729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.308764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.308777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.312790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.312839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.312852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.316348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.316384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.316397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.320226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.320261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.320273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.324497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.324533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.324545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.327789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.327836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:14.791 [2024-11-19 10:23:34.327850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:14.791 [2024-11-19 10:23:34.331686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:14.791 [2024-11-19 10:23:34.331722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.791 [2024-11-19 10:23:34.331734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.050 [2024-11-19 10:23:34.335022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.050 [2024-11-19 10:23:34.335056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.335069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.338600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.338636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.338648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.342603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.342638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.342651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.345974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.346009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.346022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.349887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.349920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.349934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.353574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.353608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.353620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.357564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.357598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.357611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.361797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.361845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.361858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.365383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.365419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.365431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.369194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.369230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.369243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.373278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.373313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.373326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.376556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.376591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.376605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.380274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.380309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.380321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.384939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.384973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.384986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.388207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.388246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.388259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.392108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.392144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.392157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.396565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.396608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.396620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.399924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.399959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.399971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.404082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.404117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.404130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.407564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.407599] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.407611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.411409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.411443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.411455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.414936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.414970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.414982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.419398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.419433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.419446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.422805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.422850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.422863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.426049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.426083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.426096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.430028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.430062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.430074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.434346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 
00:23:15.051 [2024-11-19 10:23:34.434380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.434393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.439018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.439057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.439070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.442800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.442844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.442857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.446842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.446875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.446887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.450450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.450485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.450498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.454648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.454684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.454696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.458088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.458124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.458136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.461879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.461914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.461927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.466142] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.466178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.466191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.470000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.470036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.470048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.473493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.473529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.473541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.477931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.477968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.477981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.482002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.482037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.482049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.485697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.485735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.485748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.489074] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.489109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.489121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.492974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.493010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.493022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.496909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.496945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.496957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.500989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.501025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.501037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.505351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.505390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.505403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.051 [2024-11-19 10:23:34.509788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e974a0) 00:23:15.051 [2024-11-19 10:23:34.509837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.051 [2024-11-19 10:23:34.509851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.051 00:23:15.051 Latency(us) 00:23:15.051 [2024-11-19T10:23:34.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.051 [2024-11-19T10:23:34.597Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:15.051 nvme0n1 : 2.00 7903.68 987.96 0.00 0.00 2020.43 603.23 9472.93 00:23:15.051 [2024-11-19T10:23:34.597Z] =================================================================================================================== 00:23:15.051 
[2024-11-19T10:23:34.597Z] Total : 7903.68 987.96 0.00 0.00 2020.43 603.23 9472.93 00:23:15.051 0 00:23:15.051 10:23:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:15.051 10:23:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:15.051 10:23:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:15.051 10:23:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:15.051 | .driver_specific 00:23:15.051 | .nvme_error 00:23:15.051 | .status_code 00:23:15.051 | .command_transient_transport_error' 00:23:15.310 10:23:34 -- host/digest.sh@71 -- # (( 510 > 0 )) 00:23:15.310 10:23:34 -- host/digest.sh@73 -- # killprocess 97193 00:23:15.310 10:23:34 -- common/autotest_common.sh@936 -- # '[' -z 97193 ']' 00:23:15.310 10:23:34 -- common/autotest_common.sh@940 -- # kill -0 97193 00:23:15.310 10:23:34 -- common/autotest_common.sh@941 -- # uname 00:23:15.310 10:23:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.310 10:23:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97193 00:23:15.569 10:23:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:15.569 10:23:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:15.569 10:23:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97193' 00:23:15.569 killing process with pid 97193 00:23:15.569 Received shutdown signal, test time was about 2.000000 seconds 00:23:15.569 00:23:15.569 Latency(us) 00:23:15.569 [2024-11-19T10:23:35.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.569 [2024-11-19T10:23:35.115Z] =================================================================================================================== 00:23:15.569 [2024-11-19T10:23:35.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.569 10:23:34 -- common/autotest_common.sh@955 -- # kill 97193 00:23:15.569 10:23:34 -- common/autotest_common.sh@960 -- # wait 97193 00:23:15.569 10:23:35 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:23:15.569 10:23:35 -- host/digest.sh@54 -- # local rw bs qd 00:23:15.569 10:23:35 -- host/digest.sh@56 -- # rw=randwrite 00:23:15.569 10:23:35 -- host/digest.sh@56 -- # bs=4096 00:23:15.569 10:23:35 -- host/digest.sh@56 -- # qd=128 00:23:15.569 10:23:35 -- host/digest.sh@58 -- # bperfpid=97270 00:23:15.569 10:23:35 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:23:15.569 10:23:35 -- host/digest.sh@60 -- # waitforlisten 97270 /var/tmp/bperf.sock 00:23:15.569 10:23:35 -- common/autotest_common.sh@829 -- # '[' -z 97270 ']' 00:23:15.569 10:23:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:15.569 10:23:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:15.569 10:23:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:15.569 10:23:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.569 10:23:35 -- common/autotest_common.sh@10 -- # set +x 00:23:15.569 [2024-11-19 10:23:35.056840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:23:15.569 [2024-11-19 10:23:35.056952] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97270 ] 00:23:15.827 [2024-11-19 10:23:35.198771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.827 [2024-11-19 10:23:35.239797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.827 10:23:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.827 10:23:35 -- common/autotest_common.sh@862 -- # return 0 00:23:15.827 10:23:35 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.827 10:23:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:16.395 10:23:35 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:16.395 10:23:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.395 10:23:35 -- common/autotest_common.sh@10 -- # set +x 00:23:16.395 10:23:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.395 10:23:35 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.395 10:23:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.653 nvme0n1 00:23:16.653 10:23:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:16.653 10:23:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.653 10:23:35 -- common/autotest_common.sh@10 -- # set +x 00:23:16.653 10:23:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.653 10:23:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:16.653 10:23:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.653 Running I/O for 2 seconds... 
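(Editor's sketch, for context on the host/digest.sh trace above.) The randwrite leg of the digest test is driven entirely over RPC sockets: bdevperf is started in wait mode on /var/tmp/bperf.sock, the NVMe bdev layer is told to count errors rather than retry them, the controller is attached with data digest enabled (--ddgst), crc32c corruption is injected in the accel layer, and the transient transport error counter is read back afterwards, exactly as the earlier get_transient_errcount step did for the randread run. The commands, flags, paths, socket name, and target address below are copied from the trace; the shell wrapper around them, the variable names, the fixed sleep, and the reading of the un-socketed rpc_cmd calls as going to the nvmf target's default RPC socket are assumptions added here, so treat this as an illustrative sketch rather than the verbatim digest.sh code.

  #!/usr/bin/env bash
  # Sketch of the randwrite data-digest error run traced above (not the verbatim digest.sh).
  SPDK=/home/vagrant/spdk_repo/spdk          # repo path as seen in the trace
  BPERF_SOCK=/var/tmp/bperf.sock             # bdevperf RPC socket from the trace

  # Start bdevperf in wait mode (-z) with the same workload flags as the trace:
  # randwrite, 4096-byte I/O, queue depth 128, 2-second run, core mask 0x2.
  "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperf_pid=$!
  sleep 1   # assumption: the real script waits for the RPC socket (waitforlisten) instead

  # Count NVMe errors instead of retrying them, so digest failures show up in iostat.
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Reset any previous crc32c injection (rpc_cmd in the trace carries no -s, read here as the
  # nvmf target's default RPC socket), then attach the TCP controller with data digest enabled.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
  "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt crc32c results in the accel layer so the initiator sees data digest errors on reads/writes.
  "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the workload, then pull the transient transport error count the same way
  # get_transient_errcount does in the trace above.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  errs=$("$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 )) && echo "digest errors surfaced as transient transport errors: $errs"

  kill "$bperf_pid"

The log lines that follow are the expected outcome of that setup: every completed WRITE is reported with a data digest error and printed as a COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the (( errs > 0 )) style check counts.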
00:23:16.653 [2024-11-19 10:23:36.119853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eea00 00:23:16.653 [2024-11-19 10:23:36.121156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.121204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.130564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6fa8 00:23:16.653 [2024-11-19 10:23:36.131538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.131586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.144005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ed920 00:23:16.653 [2024-11-19 10:23:36.144999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.145044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.159116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6b70 00:23:16.653 [2024-11-19 10:23:36.160518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.160568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.172148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ee5c8 00:23:16.653 [2024-11-19 10:23:36.172747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.172790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.186543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f0bc0 00:23:16.653 [2024-11-19 10:23:36.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.187916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:16.653 [2024-11-19 10:23:36.195127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e0a68 00:23:16.653 [2024-11-19 10:23:36.195456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.653 [2024-11-19 10:23:36.195496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001a 
p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.209529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6890 00:23:16.913 [2024-11-19 10:23:36.210547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.210588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.220022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7da8 00:23:16.913 [2024-11-19 10:23:36.221214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.231527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6b70 00:23:16.913 [2024-11-19 10:23:36.232095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.232134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.246187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ea680 00:23:16.913 [2024-11-19 10:23:36.247479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.247547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.256063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ebb98 00:23:16.913 [2024-11-19 10:23:36.256354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.256387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.270858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e1710 00:23:16.913 [2024-11-19 10:23:36.271497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.271555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.280895] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7100 00:23:16.913 [2024-11-19 10:23:36.282387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.282451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.296116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e0630 00:23:16.913 [2024-11-19 10:23:36.297000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.297053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.308022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ebb98 00:23:16.913 [2024-11-19 10:23:36.309718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.309772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.321570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6cc8 00:23:16.913 [2024-11-19 10:23:36.322172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.322227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.337341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f3e60 00:23:16.913 [2024-11-19 10:23:36.338625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.338671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.345631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f5378 00:23:16.913 [2024-11-19 10:23:36.346862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.346898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.359540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fac10 00:23:16.913 [2024-11-19 10:23:36.360335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.360372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.372979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ec408 00:23:16.913 [2024-11-19 10:23:36.374282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.374321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.381581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fa3a0 00:23:16.913 [2024-11-19 10:23:36.381936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.381971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.395529] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fdeb0 00:23:16.913 [2024-11-19 10:23:36.397228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.397268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.405780] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f1868 00:23:16.913 [2024-11-19 10:23:36.406726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.406765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.417163] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f1868 00:23:16.913 [2024-11-19 10:23:36.418612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.418655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.430888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eff18 00:23:16.913 [2024-11-19 10:23:36.431945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.431986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.441295] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e73e0 00:23:16.913 [2024-11-19 10:23:36.442503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.442543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:16.913 [2024-11-19 10:23:36.452191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6cc8 00:23:16.913 [2024-11-19 10:23:36.452423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:16.913 [2024-11-19 10:23:36.452450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.464204] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6cc8 00:23:17.173 [2024-11-19 10:23:36.464930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.464970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.475816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eea00 00:23:17.173 [2024-11-19 10:23:36.476060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.476099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.488062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6300 00:23:17.173 [2024-11-19 10:23:36.488950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.488992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.502112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fef90 00:23:17.173 [2024-11-19 10:23:36.503513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.503556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.516601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f2948 00:23:17.173 [2024-11-19 10:23:36.517987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.518023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.524899] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fc998 00:23:17.173 [2024-11-19 10:23:36.525960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.525998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.536649] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e88f8 00:23:17.173 [2024-11-19 10:23:36.536869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.536893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.548433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f0350 00:23:17.173 [2024-11-19 10:23:36.548875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.548914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.559955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e4140 00:23:17.173 [2024-11-19 10:23:36.560146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.560168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.574795] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f5378 00:23:17.173 [2024-11-19 10:23:36.576172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.582962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5220 00:23:17.173 [2024-11-19 10:23:36.584036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.584075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.594888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f35f0 00:23:17.173 [2024-11-19 10:23:36.595333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.595373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.609924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f57b0 00:23:17.173 [2024-11-19 10:23:36.611338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.611377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.618490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f2510 00:23:17.173 [2024-11-19 10:23:36.618903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.618941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.632919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e0630 00:23:17.173 [2024-11-19 10:23:36.634031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.634079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.641534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e4140 00:23:17.173 [2024-11-19 10:23:36.641668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.641691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.654330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ebb98 00:23:17.173 [2024-11-19 10:23:36.654636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.654674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.666174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e49b0 00:23:17.173 [2024-11-19 10:23:36.666947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.666988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.678051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6458 00:23:17.173 [2024-11-19 10:23:36.678397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.678436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.691712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fbcf0 00:23:17.173 [2024-11-19 10:23:36.692915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.692955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:17.173 [2024-11-19 10:23:36.705092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6738 00:23:17.173 [2024-11-19 10:23:36.706273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.173 [2024-11-19 10:23:36.706313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.718883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e9e10 00:23:17.433 [2024-11-19 10:23:36.719903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.719946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.732542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e9e10 00:23:17.433 [2024-11-19 10:23:36.733608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.733652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.747591] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e9e10 00:23:17.433 [2024-11-19 10:23:36.749432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.749475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.760783] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ed920 00:23:17.433 [2024-11-19 10:23:36.761628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.761668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.774840] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e8d30 00:23:17.433 [2024-11-19 10:23:36.776403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.776446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.787979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eb328 00:23:17.433 [2024-11-19 10:23:36.788769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.788810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.803191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e4578 00:23:17.433 [2024-11-19 10:23:36.804722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 
10:23:36.804787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.815116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e3498 00:23:17.433 [2024-11-19 10:23:36.815925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.815985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.827406] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6b70 00:23:17.433 [2024-11-19 10:23:36.827658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.827703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.842278] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6738 00:23:17.433 [2024-11-19 10:23:36.843388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.843432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.855983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fef90 00:23:17.433 [2024-11-19 10:23:36.857311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.857354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.869065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e88f8 00:23:17.433 [2024-11-19 10:23:36.869452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.869492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.884418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6b70 00:23:17.433 [2024-11-19 10:23:36.885237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.885279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.897682] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f3e60 00:23:17.433 [2024-11-19 10:23:36.899170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:17.433 [2024-11-19 10:23:36.899222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.910665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f8e88 00:23:17.433 [2024-11-19 10:23:36.911432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.911469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.925318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e3060 00:23:17.433 [2024-11-19 10:23:36.927132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.927174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.939330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f81e0 00:23:17.433 [2024-11-19 10:23:36.940639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.940688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.950104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f3a28 00:23:17.433 [2024-11-19 10:23:36.951137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.951181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.963550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190feb58 00:23:17.433 [2024-11-19 10:23:36.965039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.965078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:17.433 [2024-11-19 10:23:36.976353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190df988 00:23:17.433 [2024-11-19 10:23:36.977427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.433 [2024-11-19 10:23:36.977469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:36.989580] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f1868 00:23:17.692 [2024-11-19 10:23:36.990395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19408 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:17.692 [2024-11-19 10:23:36.990434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:37.002572] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f2510 00:23:17.692 [2024-11-19 10:23:37.002693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.692 [2024-11-19 10:23:37.002716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:37.019618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e0630 00:23:17.692 [2024-11-19 10:23:37.020531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.692 [2024-11-19 10:23:37.020574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:37.033372] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ed920 00:23:17.692 [2024-11-19 10:23:37.035063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.692 [2024-11-19 10:23:37.035103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:37.045766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f1430 00:23:17.692 [2024-11-19 10:23:37.047469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.692 [2024-11-19 10:23:37.047509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:17.692 [2024-11-19 10:23:37.057312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fa3a0 00:23:17.693 [2024-11-19 10:23:37.058989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.059039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.068894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e01f8 00:23:17.693 [2024-11-19 10:23:37.070595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.070639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.080432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6fa8 00:23:17.693 [2024-11-19 10:23:37.081846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18769 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.081889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.091813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e3498 00:23:17.693 [2024-11-19 10:23:37.092563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.092602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.101945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6020 00:23:17.693 [2024-11-19 10:23:37.102183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.102211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.115932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ea248 00:23:17.693 [2024-11-19 10:23:37.117525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.117566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.126124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eee38 00:23:17.693 [2024-11-19 10:23:37.127015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.127055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.139243] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ecc78 00:23:17.693 [2024-11-19 10:23:37.140152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.140190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.150750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e3498 00:23:17.693 [2024-11-19 10:23:37.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.151666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.162258] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eee38 00:23:17.693 [2024-11-19 10:23:37.163125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:2457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.163165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.173831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6300 00:23:17.693 [2024-11-19 10:23:37.174617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.174655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.185355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e9168 00:23:17.693 [2024-11-19 10:23:37.186128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:10936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.186168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.196904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fb048 00:23:17.693 [2024-11-19 10:23:37.197629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.197668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.208445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190dfdc0 00:23:17.693 [2024-11-19 10:23:37.209160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.209198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.219699] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f4b08 00:23:17.693 [2024-11-19 10:23:37.221160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.221212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:17.693 [2024-11-19 10:23:37.231296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fc998 00:23:17.693 [2024-11-19 10:23:37.231830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.693 [2024-11-19 10:23:37.231873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.245764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e49b0 00:23:17.953 [2024-11-19 10:23:37.247149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.247212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.254939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fe720 00:23:17.953 [2024-11-19 10:23:37.255320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.255375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.270585] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5658 00:23:17.953 [2024-11-19 10:23:37.271663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.271719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.279757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fa7d8 00:23:17.953 [2024-11-19 10:23:37.279861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.279894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.295388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190edd58 00:23:17.953 [2024-11-19 10:23:37.296183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.296238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.307552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ebb98 00:23:17.953 [2024-11-19 10:23:37.308958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.309023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.319059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e1710 00:23:17.953 [2024-11-19 10:23:37.320614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.320657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.331168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fd208 00:23:17.953 [2024-11-19 10:23:37.332628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:19 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.342835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eaab8 00:23:17.953 [2024-11-19 10:23:37.344332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.344373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.354518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190dfdc0 00:23:17.953 [2024-11-19 10:23:37.356044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.356086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.366210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7970 00:23:17.953 [2024-11-19 10:23:37.367719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.367759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.378096] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e1710 00:23:17.953 [2024-11-19 10:23:37.379699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.379739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.389764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e8d30 00:23:17.953 [2024-11-19 10:23:37.390504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.390541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.399909] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7970 00:23:17.953 [2024-11-19 10:23:37.400669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.400707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.412017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5658 00:23:17.953 [2024-11-19 10:23:37.412721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.412760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.423556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5a90 00:23:17.953 [2024-11-19 10:23:37.424216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.424253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.435136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fa3a0 00:23:17.953 [2024-11-19 10:23:37.435785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.435832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.446664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f81e0 00:23:17.953 [2024-11-19 10:23:37.447362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.447401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.458266] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f0bc0 00:23:17.953 [2024-11-19 10:23:37.458896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.458933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.469741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fdeb0 00:23:17.953 [2024-11-19 10:23:37.470457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.470495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.481586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e2c28 00:23:17.953 [2024-11-19 10:23:37.482786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.482847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:17.953 [2024-11-19 10:23:37.493379] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f2510 00:23:17.953 [2024-11-19 10:23:37.493907] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.953 [2024-11-19 10:23:37.493947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:18.212 [2024-11-19 10:23:37.507789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7970 00:23:18.212 [2024-11-19 10:23:37.509019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.212 [2024-11-19 10:23:37.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:18.212 [2024-11-19 10:23:37.516365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7da8 00:23:18.212 [2024-11-19 10:23:37.516610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.212 [2024-11-19 10:23:37.516641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:18.212 [2024-11-19 10:23:37.529923] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e27f0 00:23:18.212 [2024-11-19 10:23:37.530692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.212 [2024-11-19 10:23:37.530733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:18.212 [2024-11-19 10:23:37.541790] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6020 00:23:18.212 [2024-11-19 10:23:37.542792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.212 [2024-11-19 10:23:37.542840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.553874] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ea248 00:23:18.213 [2024-11-19 10:23:37.554605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.554643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.564136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190df550 00:23:18.213 [2024-11-19 10:23:37.565160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.565199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.575121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fd208 00:23:18.213 [2024-11-19 
10:23:37.576228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.576267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.586712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fc128 00:23:18.213 [2024-11-19 10:23:37.587617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.587657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.598359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f8e88 00:23:18.213 [2024-11-19 10:23:37.599273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.599311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.611021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fd208 00:23:18.213 [2024-11-19 10:23:37.611596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.611632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.622898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e95a0 00:23:18.213 [2024-11-19 10:23:37.623506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.623544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.633111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f81e0 00:23:18.213 [2024-11-19 10:23:37.633231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.633253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.647156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e9e10 00:23:18.213 [2024-11-19 10:23:37.648890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.648930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.661047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ed920 
00:23:18.213 [2024-11-19 10:23:37.662368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.662405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.669662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f1868 00:23:18.213 [2024-11-19 10:23:37.670012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.670049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.683185] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fdeb0 00:23:18.213 [2024-11-19 10:23:37.684071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.684112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.694224] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e6b70 00:23:18.213 [2024-11-19 10:23:37.695646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.695691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.706355] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f81e0 00:23:18.213 [2024-11-19 10:23:37.706977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.707029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.719945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f4298 00:23:18.213 [2024-11-19 10:23:37.721043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.721081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.728566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eea00 00:23:18.213 [2024-11-19 10:23:37.728681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.728706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.742770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with 
pdu=0x2000190e7c50 00:23:18.213 [2024-11-19 10:23:37.743439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.743477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.213 [2024-11-19 10:23:37.754470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f2948 00:23:18.213 [2024-11-19 10:23:37.756192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.213 [2024-11-19 10:23:37.756232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.768359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190dfdc0 00:23:18.472 [2024-11-19 10:23:37.769671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.769709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.776921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f20d8 00:23:18.472 [2024-11-19 10:23:37.777257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.777300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.789687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fc560 00:23:18.472 [2024-11-19 10:23:37.790220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.790258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.801299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f6890 00:23:18.472 [2024-11-19 10:23:37.802645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.802684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.813011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f20d8 00:23:18.472 [2024-11-19 10:23:37.813509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.813547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.824568] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf4ea00) with pdu=0x2000190ec840 00:23:18.472 [2024-11-19 10:23:37.825080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.825124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.836374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190de038 00:23:18.472 [2024-11-19 10:23:37.837113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.837151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.849545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f4f40 00:23:18.472 [2024-11-19 10:23:37.850911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.850947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.859160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5a90 00:23:18.472 [2024-11-19 10:23:37.860108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.860146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.872300] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190eb328 00:23:18.472 [2024-11-19 10:23:37.873318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:20037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.873355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.882536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f8a50 00:23:18.472 [2024-11-19 10:23:37.883610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.883648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.894110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e23b8 00:23:18.472 [2024-11-19 10:23:37.895587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.905757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xf4ea00) with pdu=0x2000190e49b0 00:23:18.472 [2024-11-19 10:23:37.906512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.906550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.917316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f8a50 00:23:18.472 [2024-11-19 10:23:37.917970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.918007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.928847] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ebfd0 00:23:18.472 [2024-11-19 10:23:37.929440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.929477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.940367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ef6a8 00:23:18.472 [2024-11-19 10:23:37.941014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.941050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.951860] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5658 00:23:18.472 [2024-11-19 10:23:37.952445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.952483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.963383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190ed920 00:23:18.472 [2024-11-19 10:23:37.963967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.472 [2024-11-19 10:23:37.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:18.472 [2024-11-19 10:23:37.974870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f4b08 00:23:18.472 [2024-11-19 10:23:37.975648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.473 [2024-11-19 10:23:37.975694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:18.473 [2024-11-19 10:23:37.986564] tcp.c:2036:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e99d8 00:23:18.473 [2024-11-19 10:23:37.987270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.473 [2024-11-19 10:23:37.987308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:18.473 [2024-11-19 10:23:37.999252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fd640 00:23:18.473 [2024-11-19 10:23:37.999928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.473 [2024-11-19 10:23:37.999969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:18.473 [2024-11-19 10:23:38.010040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fd208 00:23:18.473 [2024-11-19 10:23:38.011101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.473 [2024-11-19 10:23:38.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.021912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f5378 00:23:18.731 [2024-11-19 10:23:38.022812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.022870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.033678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f7970 00:23:18.731 [2024-11-19 10:23:38.034246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.034291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.047766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190fa7d8 00:23:18.731 [2024-11-19 10:23:38.049268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.049313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.059418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f4f40 00:23:18.731 [2024-11-19 10:23:38.060897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.060937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.070979] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e0630 00:23:18.731 [2024-11-19 10:23:38.072431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.072479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.082495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190f20d8 00:23:18.731 [2024-11-19 10:23:38.084232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.084274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.093007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5220 00:23:18.731 [2024-11-19 10:23:38.094466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.094505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:18.731 [2024-11-19 10:23:38.104851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ea00) with pdu=0x2000190e5a90 00:23:18.731 [2024-11-19 10:23:38.105552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:18.731 [2024-11-19 10:23:38.105589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:18.731 00:23:18.731 Latency(us) 00:23:18.731 [2024-11-19T10:23:38.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.731 [2024-11-19T10:23:38.277Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:18.731 nvme0n1 : 2.01 20863.47 81.50 0.00 0.00 6129.29 2398.02 16443.58 00:23:18.731 [2024-11-19T10:23:38.277Z] =================================================================================================================== 00:23:18.731 [2024-11-19T10:23:38.277Z] Total : 20863.47 81.50 0.00 0.00 6129.29 2398.02 16443.58 00:23:18.731 0 00:23:18.731 10:23:38 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:18.731 10:23:38 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:18.731 10:23:38 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:18.731 | .driver_specific 00:23:18.731 | .nvme_error 00:23:18.731 | .status_code 00:23:18.731 | .command_transient_transport_error' 00:23:18.731 10:23:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:18.989 10:23:38 -- host/digest.sh@71 -- # (( 164 > 0 )) 00:23:18.989 10:23:38 -- host/digest.sh@73 -- # killprocess 97270 00:23:18.989 10:23:38 -- common/autotest_common.sh@936 -- # '[' -z 97270 ']' 00:23:18.989 10:23:38 -- common/autotest_common.sh@940 -- # kill -0 97270 00:23:18.989 10:23:38 -- common/autotest_common.sh@941 -- # uname 00:23:18.989 10:23:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
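The pass/fail check for this first run is the bdev_get_iostat/jq pipeline logged a few records above: the number of transient transport errors counted by the initiator must be greater than zero. A minimal stand-alone sketch of that check, assuming the same bperf RPC socket and bdev name used in this run:

# Read the NVMe error counters from the initiator-side bdev and extract the
# transient transport error count (same jq path as host/digest.sh uses above).
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# The test passes when at least one injected digest error was observed.
(( errcount > 0 )) && echo "digest errors detected: $errcount"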
00:23:18.989 10:23:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97270 00:23:18.989 10:23:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.989 10:23:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.989 10:23:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97270' 00:23:18.989 killing process with pid 97270 00:23:18.989 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.989 00:23:18.989 Latency(us) 00:23:18.989 [2024-11-19T10:23:38.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.989 [2024-11-19T10:23:38.535Z] =================================================================================================================== 00:23:18.990 [2024-11-19T10:23:38.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.990 10:23:38 -- common/autotest_common.sh@955 -- # kill 97270 00:23:18.990 10:23:38 -- common/autotest_common.sh@960 -- # wait 97270 00:23:19.248 10:23:38 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:23:19.248 10:23:38 -- host/digest.sh@54 -- # local rw bs qd 00:23:19.248 10:23:38 -- host/digest.sh@56 -- # rw=randwrite 00:23:19.248 10:23:38 -- host/digest.sh@56 -- # bs=131072 00:23:19.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:19.248 10:23:38 -- host/digest.sh@56 -- # qd=16 00:23:19.248 10:23:38 -- host/digest.sh@58 -- # bperfpid=97341 00:23:19.248 10:23:38 -- host/digest.sh@60 -- # waitforlisten 97341 /var/tmp/bperf.sock 00:23:19.248 10:23:38 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:19.248 10:23:38 -- common/autotest_common.sh@829 -- # '[' -z 97341 ']' 00:23:19.248 10:23:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:19.248 10:23:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.248 10:23:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:19.248 10:23:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.248 10:23:38 -- common/autotest_common.sh@10 -- # set +x 00:23:19.248 [2024-11-19 10:23:38.647722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:19.248 [2024-11-19 10:23:38.648038] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97341 ] 00:23:19.248 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:19.248 Zero copy mechanism will not be used. 
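For the second run (randwrite, 128 KiB I/O, queue depth 16) the script launches a fresh bdevperf instance in wait mode (-z) on the private RPC socket and blocks until it is listening before configuring it. A minimal sketch of that launch-and-wait pattern, using the paths and flags shown in this log; the script itself relies on the autotest waitforlisten helper, for which a plain RPC poll stands in here:

# Start bdevperf on its own RPC socket and wait until it accepts RPCs.
BPERF_SOCK=/var/tmp/bperf.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Poll the socket with a harmless RPC until the application is up
# (waitforlisten in autotest_common.sh does the equivalent).
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" rpc_get_methods &>/dev/null; do
  sleep 0.1
done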
00:23:19.248 [2024-11-19 10:23:38.782274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.506 [2024-11-19 10:23:38.829612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.443 10:23:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.443 10:23:39 -- common/autotest_common.sh@862 -- # return 0 00:23:20.443 10:23:39 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.443 10:23:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:20.701 10:23:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:20.701 10:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.701 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:20.701 10:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.701 10:23:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.701 10:23:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:20.959 nvme0n1 00:23:20.959 10:23:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:20.959 10:23:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.959 10:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:20.959 10:23:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.960 10:23:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:20.960 10:23:40 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:21.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:21.220 Zero copy mechanism will not be used. 00:23:21.220 Running I/O for 2 seconds... 
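The RPC sequence logged above arms the actual digest-error test: NVMe error statistics with unlimited bdev retries on the initiator, a clean accel crc32c state, a TCP controller attached with data digest enabled, crc32c corruption injected, and finally the bdevperf workload. A minimal sketch of the same sequence, assuming (as rpc_cmd does in the script) that the accel error injection goes to the target application's default RPC socket:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock
# Count NVMe errors on the initiator and retry transient failures indefinitely.
$RPC -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any previous crc32c error injection (target application, default socket assumed).
$RPC accel_error_inject_error -o crc32c -t disable
# Attach the controller over TCP with data digest (--ddgst) enabled.
$RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Arm crc32c corruption with the same flags as this run, then start the workload.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests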
00:23:21.220 [2024-11-19 10:23:40.537314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.537719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.537770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.541964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.542254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.542310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.546493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.546614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.546643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.551013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.551118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.551145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.555349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.555498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.555542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.559885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.559991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.560017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.220 [2024-11-19 10:23:40.564389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.220 [2024-11-19 10:23:40.564564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.220 [2024-11-19 10:23:40.564601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.568929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.569186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.569229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.573429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.573643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.578015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.578181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.578230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.582525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.582756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.582801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.586941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.587050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.587076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.591436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.591550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.591574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.595993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.596169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.596204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.600464] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.600614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.600639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.605110] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.605305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.605341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.609630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.609748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.609773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.614175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.614329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.614364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.618675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.618846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.623210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.623327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.623351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.627718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.627813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.627853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.632255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.632425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.632461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.636859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.636991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.637016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.641407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.641603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.641641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.645919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.646030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.646054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.650426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.650561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.650595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.654976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.655117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.655142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.659490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.659610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 
10:23:40.659635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.664060] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.664156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.221 [2024-11-19 10:23:40.664180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.221 [2024-11-19 10:23:40.668540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.221 [2024-11-19 10:23:40.668708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.668744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.673125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.673279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.673304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.677648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.677870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.677911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.682350] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.682549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.682585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.686913] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.687080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.687105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.691437] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.691567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.691598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.695926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.696049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.696074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.700419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.700533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.700558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.705034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.705193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.705223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.709531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.709669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.709699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.714124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.714316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.714351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.718603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.718731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.718755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.723156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.723305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.723340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.727648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.727780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.727804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.732159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.732289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.732320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.736655] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.736771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.736795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.741222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.741383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.741417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.745705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.745882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.745919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.750269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.750482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.750516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.754810] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.754984] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.755034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.222 [2024-11-19 10:23:40.759308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.222 [2024-11-19 10:23:40.759442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.222 [2024-11-19 10:23:40.759472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.763873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.764005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.764043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.768321] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.768437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.768461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.772871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.772979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.773003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.777414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.777580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.781982] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.782152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.782187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.786618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.786833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.786867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.791170] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.791299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.791323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.795680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.795851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.795885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.800165] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.800317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.800346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.804675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.804775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.804799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.809149] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.809252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.809276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.813742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.813919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.813961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.818196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 
10:23:40.818356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.818383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.822791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.823022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.823057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.827327] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.827518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.483 [2024-11-19 10:23:40.831934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.483 [2024-11-19 10:23:40.832066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.483 [2024-11-19 10:23:40.832091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.836418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.836586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.836621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.840996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.841092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.841115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.845484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.845581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.845605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.850076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with 
pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.850235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.850261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.854598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.854735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.854759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.859310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.859521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.859555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.863858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.863979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.864003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.868359] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.868510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.868545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.872957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.873095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.873119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.877402] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.877510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.877535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.881921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.882015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.882039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.886423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.886583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.886618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.891007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.891160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.891194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.895575] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.895789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.895841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.900070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.900203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.900229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.904583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.904735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.904760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.909157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.909290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.909314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.913676] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.913774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.913799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.918171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.918284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.918308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.922799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.922975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.923018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.927395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.927555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.927591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.932044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.932246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.932281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.936659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.936779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.936803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.484 [2024-11-19 10:23:40.941207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.484 [2024-11-19 10:23:40.941343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.484 [2024-11-19 10:23:40.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:23:21.484 [2024-11-19 10:23:40.945842] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.946000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.946032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.950308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.950430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.950455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.954785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.954897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.954921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.959426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.959594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.959628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.963995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.964148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.964172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.968608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.968812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.968874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.973095] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.973203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.973227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.977617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.977752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.977778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.982174] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.982318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.982342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.986584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.986707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.991141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.991253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.991278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:40.995707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:40.995883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:40.995918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.000250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.000384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.000409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.004857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.005067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.005101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.009376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.009489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.009512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.013919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.014078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.014104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.018408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.018556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.018590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.485 [2024-11-19 10:23:41.022921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.485 [2024-11-19 10:23:41.023028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.485 [2024-11-19 10:23:41.023051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.027409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.027517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.027541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.031991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.032152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.032187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.036537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.036696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.036727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.041080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.041271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.041305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.045656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.045766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.045790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.050118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.050275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.050309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.054630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.054758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.054782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.059130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.059234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.059258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.063617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.063733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 10:23:41.063757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.068184] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.068346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.745 [2024-11-19 
10:23:41.068372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.745 [2024-11-19 10:23:41.072665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.745 [2024-11-19 10:23:41.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.072873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.077275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.077470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.077505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.081764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.081894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.081917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.086253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.086400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.086424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.090769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.090915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.090939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.095269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.095367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.095392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.099750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.099886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.099911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.104375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.104535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.104570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.108901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.109045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.109069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.113509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.113765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.118015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.118144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.118168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.122527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.122675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.122716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.127053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.127183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.127207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.131479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.131578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.131602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.136009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.136105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.136129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.140500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.140660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.140695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.145032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.145182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.145217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.149607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.149799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.149846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.154054] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.154248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.154283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.158567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.158697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.158721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.163123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.163253] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.167543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.167642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.167667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.172074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.172173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.172197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.176648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.176814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.176860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.746 [2024-11-19 10:23:41.181192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.746 [2024-11-19 10:23:41.181334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.746 [2024-11-19 10:23:41.181360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.185755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.185969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.186004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.190304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.190432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.190456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.194871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.195045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.195075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.199387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.199532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.199568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.203801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.203931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.203956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.208356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.208473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.208497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.212949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.213121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.213156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.217530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.217678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.217705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.222190] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.222386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.222421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.226700] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 
[2024-11-19 10:23:41.226917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.226952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.231323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.231473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.231497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.235957] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.236115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.236151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.240440] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.240541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.240578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.245044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.245170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.245193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.249638] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.249807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.249854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.254202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.254375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.254412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.258863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) 
with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.259093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.259132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.263408] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.263548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.263584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.267939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.268090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.268120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.272445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.272593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.272635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.277032] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.277128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.277153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.281547] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.281644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.281669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.747 [2024-11-19 10:23:41.286191] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:21.747 [2024-11-19 10:23:41.286357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.747 [2024-11-19 10:23:41.286403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.008 [2024-11-19 10:23:41.290758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.008 [2024-11-19 10:23:41.290935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.008 [2024-11-19 10:23:41.290966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.008 [2024-11-19 10:23:41.295334] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.008 [2024-11-19 10:23:41.295545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.008 [2024-11-19 10:23:41.295580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.008 [2024-11-19 10:23:41.299859] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.008 [2024-11-19 10:23:41.299981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.008 [2024-11-19 10:23:41.300005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.008 [2024-11-19 10:23:41.304386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.008 [2024-11-19 10:23:41.304523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.008 [2024-11-19 10:23:41.304548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.008 [2024-11-19 10:23:41.309061] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.008 [2024-11-19 10:23:41.309191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.309216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.313577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.313693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.313717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.318064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.318163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.318188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.322595] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.322762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.322789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.327189] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.327347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.327382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.331784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.332019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.332054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.336342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.336502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.340906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.341038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.341068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.345451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.345599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.345623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.350005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.350102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.350126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
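Every completion notice in this run carries the same status, `TRANSIENT TRANSPORT ERROR (00/22)`: status code type 0x0 (generic command status) and status code 0x22, with `dnr:0`, so the host may retry the command. A minimal sketch of decoding those fields from the 16-bit completion status word, with the field layout taken from the NVMe base specification (the struct and names are illustrative, not SPDK's):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the 16-bit "status + phase" word in a completion
 * queue entry (CQE dword 3, bits 31:16), matching the fields printed in
 * the log above: p, sc, sct, m, dnr. */
struct cpl_status {
	uint8_t p;    /* phase tag            - bit 0      */
	uint8_t sc;   /* status code          - bits 8:1   */
	uint8_t sct;  /* status code type     - bits 11:9  */
	uint8_t crd;  /* command retry delay  - bits 13:12 */
	uint8_t m;    /* more                 - bit 14     */
	uint8_t dnr;  /* do not retry         - bit 15     */
};

static struct cpl_status
decode_status(uint16_t raw)
{
	struct cpl_status s = {
		.p   = raw & 0x1,
		.sc  = (raw >> 1) & 0xFF,
		.sct = (raw >> 9) & 0x7,
		.crd = (raw >> 12) & 0x3,
		.m   = (raw >> 14) & 0x1,
		.dnr = (raw >> 15) & 0x1,
	};
	return s;
}

int
main(void)
{
	/* SCT 0x0 (generic), SC 0x22 (Transient Transport Error),
	 * phase/m/dnr all zero - the status reported throughout this log. */
	uint16_t raw = (uint16_t)((0x0 << 9) | (0x22 << 1));
	struct cpl_status s = decode_status(raw);

	printf("sct:%x sc:%x p:%u m:%u dnr:%u -> %s\n",
	       s.sct, s.sc, s.p, s.m, s.dnr,
	       (s.sct == 0x0 && s.sc == 0x22) ? "TRANSIENT TRANSPORT ERROR" : "other");
	return 0;
}
```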
00:23:22.009 [2024-11-19 10:23:41.354495] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.354598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.354626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.359037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.359197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.359231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.363540] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.363676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.363700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.368122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.368339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.368374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.372619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.372749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.372773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.377084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.377236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.381603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.381736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.381766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.386082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.386194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.386218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.390604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.390716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.390739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.395156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.395318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.395353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.399681] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.399870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.399904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.404320] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.404535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.404570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.408813] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.009 [2024-11-19 10:23:41.408939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.009 [2024-11-19 10:23:41.408964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.009 [2024-11-19 10:23:41.413354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.413489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.413520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.417925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.418082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.418116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.422418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.422537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.422561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.426967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.427092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.427116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.431558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.431720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.431756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.436120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.436245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.436284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.440759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.440968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.441010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.445268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.445463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.445503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.449837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.449990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.450024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.454344] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.454508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.454543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.458893] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.459011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.459037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.463412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.463509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.463533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.467966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.468138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.468173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.472537] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.472683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.472707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.477127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.477318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 
[2024-11-19 10:23:41.477353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.481605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.481723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.481747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.486085] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.486217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.486247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.490624] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.490756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.490781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.495179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.495281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.495305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.499693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.499791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.499814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.504248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.504409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.504434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.508729] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.508886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.508921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.513314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.513506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.010 [2024-11-19 10:23:41.513543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.010 [2024-11-19 10:23:41.517848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.010 [2024-11-19 10:23:41.517970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.517993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.522364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.522499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.522522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.526805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.526982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.527025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.531293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.531395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.531420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.535765] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.535881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.535905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.540377] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.540550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.540584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.544951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.545109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.545144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.011 [2024-11-19 10:23:41.549530] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.011 [2024-11-19 10:23:41.549726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.011 [2024-11-19 10:23:41.549760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.554014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.554214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.554248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.558541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.558692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.558726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.563070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.563232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.563266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.567545] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.567661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.567686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.572076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.572172] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.572197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.576556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.576735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.576770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.581202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.581357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.581391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.585812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.586030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.586065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.590316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.590472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.590506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.594927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.595082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.595116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.599453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.599595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.599629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.603912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.604050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.604089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.608373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.608489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.608519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.612953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.613112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.613146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.617517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.617655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.617690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.622186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.622379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.622420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.626650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.626778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.626813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.631205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.631340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.631375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.635771] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 
[2024-11-19 10:23:41.635945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.635980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.640281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.640403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.640434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.644754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.644866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.644890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.649390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.649551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.649588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.653983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.654196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.654232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.658542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.272 [2024-11-19 10:23:41.658753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.272 [2024-11-19 10:23:41.658789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.272 [2024-11-19 10:23:41.663151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.663278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.663303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.668001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) 
with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.668135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.668171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.672601] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.672741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.672776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.677133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.677249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.677275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.681662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.681759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.686288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.686462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.686501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.690898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.691047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.691082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.695521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.695734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.695774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.700027] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.700223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.700263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.704536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.704690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.704725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.709088] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.709252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.709293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.713543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.713645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.713669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.718045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.718159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.718183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.722687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.722877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.722917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.727261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.727461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.731938] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.732130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.732170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.736419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.736543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.736573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.740928] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.741072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.741107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.745418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.745547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.745581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.749919] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.750028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.750061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.754410] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.754520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.758976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.759152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.759192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
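The repeating pairs above come from NVMe/TCP data digest (DDGST) verification: the receiver recomputes a CRC32C over each data PDU and, when it does not match the digest carried in the PDU, the WRITE is completed back with TRANSIENT TRANSPORT ERROR (00/22) and dnr:0, i.e. a retryable transport-level failure rather than a media error. The following is a minimal illustrative sketch of that check, not SPDK's tcp.c code; the 32-byte payload and the corruption step are hypothetical stand-ins for one of the lba/len:32 WRITEs logged here, and the CRC32C is done bitwise for clarity.

/* ddgst_check.c - illustrative CRC32C data digest check (not SPDK code). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise, reflected CRC32C (Castagnoli polynomial, reflected form 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload standing in for one data PDU of a 32-block WRITE. */
    uint8_t data[32];
    memset(data, 0xA5, sizeof(data));

    uint32_t expected = crc32c(data, sizeof(data)); /* digest carried in the DDGST field */

    data[7] ^= 0x01;                                /* simulate corruption in flight */
    uint32_t actual = crc32c(data, sizeof(data));   /* digest recomputed on receive */

    if (actual != expected) {
        printf("data digest mismatch: expected 0x%08x, got 0x%08x\n",
               expected, actual);
        /* A real target would fail the command with a transient transport error
         * so the host may retry, which is the pattern visible in the log above. */
    }
    return 0;
}

Compiled with any C compiler and run, the sketch reports the mismatch, mirroring the data_crc32_calc_done data digest errors and retryable completions recorded in this test.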
00:23:22.273 [2024-11-19 10:23:41.763536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.763693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.763730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.768087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.768295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.768329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.772627] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.772730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.772754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.777093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.777235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.777258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.781611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.273 [2024-11-19 10:23:41.781742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.273 [2024-11-19 10:23:41.781767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.273 [2024-11-19 10:23:41.786040] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.786151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.786175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.790527] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.790639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.790662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.795137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.795301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.795336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.799641] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.799853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.799880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.804238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.804449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.804489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.808709] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.808857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.808882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.274 [2024-11-19 10:23:41.813253] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.274 [2024-11-19 10:23:41.813459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.274 [2024-11-19 10:23:41.813494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.817767] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.817939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.817982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.822282] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.822399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.822423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.826875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.826972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.827006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.831427] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.831586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.831616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.835996] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.836161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.836191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.840594] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.840787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.840839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.845116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.845316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.845349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.849645] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.849802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.849852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.854180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.854319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.854342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.858664] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.534 [2024-11-19 10:23:41.858770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.534 [2024-11-19 10:23:41.858794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.534 [2024-11-19 10:23:41.863211] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.863327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.863351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.867721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.867904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.867939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.872308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.872468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.872493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.876900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.877117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.877153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.881407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.881554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.881585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.885964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.886107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 
[2024-11-19 10:23:41.886132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.890491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.890622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.890646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.894951] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.895060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.895084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.899417] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.899534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.899557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.903991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.904152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.904176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.908492] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.908645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.908669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.913118] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.913318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.913342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.917639] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.917773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.922188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.922334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.922358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.926733] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.926877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.926917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.931297] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.931400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.931425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.935851] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.935954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.935978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.940398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.940557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.940581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.944921] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.945062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.945085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.949487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.949689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.949714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.954026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.954220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.535 [2024-11-19 10:23:41.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.535 [2024-11-19 10:23:41.958557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.535 [2024-11-19 10:23:41.958686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.958709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.963051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.963207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.963231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.967562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.967658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.967682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.972034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.972147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.972170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.976576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.976743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.976767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.981137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.981286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.981310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.985688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.985897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.985921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.990239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.990349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.990372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.994758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.994965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.994990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:41.999237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:41.999366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:41.999391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.003811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.003922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.003947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.008318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.008414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.008438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.012887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 
[2024-11-19 10:23:42.013048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.013072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.017416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.017562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.017586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.022044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.022234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.026511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.026665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.026688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.031079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.031214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.031239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.035642] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.035778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.035802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.040115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.040231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.040256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.044626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with 
pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.044723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.044747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.049133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.049298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.049322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.536 [2024-11-19 10:23:42.053647] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.536 [2024-11-19 10:23:42.053784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.536 [2024-11-19 10:23:42.053808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.537 [2024-11-19 10:23:42.058362] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.537 [2024-11-19 10:23:42.058555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.537 [2024-11-19 10:23:42.058578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.537 [2024-11-19 10:23:42.062912] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.537 [2024-11-19 10:23:42.063045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.537 [2024-11-19 10:23:42.063069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.537 [2024-11-19 10:23:42.067395] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.537 [2024-11-19 10:23:42.067542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.537 [2024-11-19 10:23:42.067567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.537 [2024-11-19 10:23:42.071932] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.537 [2024-11-19 10:23:42.072064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.537 [2024-11-19 10:23:42.072088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.537 [2024-11-19 10:23:42.076398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.537 [2024-11-19 10:23:42.076514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.537 [2024-11-19 10:23:42.076542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.080911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.081029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.081053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.085454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.085612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.085636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.089963] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.090157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.090181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.094515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.094726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.094750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.099038] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.099169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.099193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.103560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.103708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.103732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.108062] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.108195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.108219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.112560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.112660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.112684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.117066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.117163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.117187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.121626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.121786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.121830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.126160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.126314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.130724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.130952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.130987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.135288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.135402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.135427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
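Each pair of messages in this burst is the same failure repeating: tcp.c reports that the data digest (DDGST) computed over a WRITE payload does not match the digest carried in the PDU, and the corresponding command is then completed with TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch of tallying the two message types from a captured console log is below; build.log is a hypothetical capture path, and grep -o is used so occurrences are counted even when several entries share one wrapped line.

# Minimal sketch: tally the digest-error messages shown above from a saved log.
# build.log is a hypothetical path; the patterns match the log text verbatim.
LOG=build.log

digest_errors=$(grep -o 'Data digest error on tqpair' "$LOG" | wc -l)
transient_completions=$(grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' "$LOG" | wc -l)

# Every digest failure should surface as exactly one transient-error completion.
echo "data digest errors:    $digest_errors"
echo "transient completions: $transient_completions"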
00:23:22.798 [2024-11-19 10:23:42.139845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.139983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.140017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.144438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.144579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.144603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.148904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.149022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.149066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.153477] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.153582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.153607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.158124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.158278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.158301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.162654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.162902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.163031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.167399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.167684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.167830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.171995] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.172103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.172127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.176517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.176658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.176682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.181111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.181245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.181270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.185564] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.185664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.798 [2024-11-19 10:23:42.185688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.798 [2024-11-19 10:23:42.190080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.798 [2024-11-19 10:23:42.190193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.190217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.194698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.194889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.194913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.199364] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.199549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.199574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.204081] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.204308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.204344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.208533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.208718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.208742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.213125] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.213262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.213286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.217677] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.217816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.217857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.222166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.222270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.222294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.226712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.226837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.226862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.231373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.231537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.231560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.235937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.236081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.236105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.240497] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.240689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.240714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.245071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.245184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.245208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.249571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.249732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.249756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.254288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.254422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.254447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.258869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.259009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.259034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.263450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.263567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 
[2024-11-19 10:23:42.263591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.268087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.268250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.268274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.272656] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.272834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.272858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.277261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.277498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.281796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.282010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.282041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.286401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.286554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.286578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.291084] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.291241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.291266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.295570] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.295687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.295712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.300136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.300256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.300280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.304745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.304927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.304952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.309312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.799 [2024-11-19 10:23:42.309454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.799 [2024-11-19 10:23:42.309478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.799 [2024-11-19 10:23:42.313924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.314116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.314141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.800 [2024-11-19 10:23:42.318433] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.318554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.318578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.800 [2024-11-19 10:23:42.323016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.323179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.323203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.800 [2024-11-19 10:23:42.327626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.327758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.327781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.800 [2024-11-19 10:23:42.332139] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.332255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.332279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.800 [2024-11-19 10:23:42.336721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:22.800 [2024-11-19 10:23:42.336852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.800 [2024-11-19 10:23:42.336876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.341388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.341548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.341572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.345989] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.346139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.346163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.350544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.350736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.350760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.355202] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.355314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.355338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.359778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.359926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.359950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.364386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.364545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.364569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.368852] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.368954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.368978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.373386] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.373509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.373533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.378049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.378211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.378248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.382654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.382796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.382837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.387261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.387455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.060 [2024-11-19 10:23:42.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.060 [2024-11-19 10:23:42.391799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.060 [2024-11-19 10:23:42.391927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.391951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.396366] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.396500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.396525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.400961] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.401090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.401115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.405499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.405622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.405646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.410075] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.410182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.410206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.414725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.414910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.414935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.419274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.419428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.419452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.424023] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 
[2024-11-19 10:23:42.424222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.428562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.428699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.428723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.433227] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.433365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.433388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.437816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.437990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.438014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.442399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.442521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.442545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.447063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.447172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.447197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.451705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.451880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.451906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.456335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) 
with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.456481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.456505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.461104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.461318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.461369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.465738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.465872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.465896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.470269] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.470430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.470454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.474944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.475090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.475114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.479528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.479629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.479653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.484137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.484232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.484256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.488692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.488867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.488902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.493353] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.493551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.493575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.497998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.498223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.498247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.502604] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.502715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.502738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.507235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.507370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.507394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.511776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.511944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.061 [2024-11-19 10:23:42.511969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.061 [2024-11-19 10:23:42.516365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.061 [2024-11-19 10:23:42.516487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.062 [2024-11-19 10:23:42.516511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:23.062 [2024-11-19 10:23:42.520957] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.062 [2024-11-19 10:23:42.521069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.062 [2024-11-19 10:23:42.521093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:23.062 [2024-11-19 10:23:42.525583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.062 [2024-11-19 10:23:42.525747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.062 [2024-11-19 10:23:42.525770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:23.062 [2024-11-19 10:23:42.530146] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf4ed40) with pdu=0x2000190fef90 00:23:23.062 [2024-11-19 10:23:42.530297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:23.062 [2024-11-19 10:23:42.530321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:23.062 00:23:23.062 Latency(us) 00:23:23.062 [2024-11-19T10:23:42.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.062 [2024-11-19T10:23:42.608Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:23.062 nvme0n1 : 2.00 6807.65 850.96 0.00 0.00 2344.80 1757.56 4974.78 00:23:23.062 [2024-11-19T10:23:42.608Z] =================================================================================================================== 00:23:23.062 [2024-11-19T10:23:42.608Z] Total : 6807.65 850.96 0.00 0.00 2344.80 1757.56 4974.78 00:23:23.062 0 00:23:23.062 10:23:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:23.062 10:23:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:23.062 10:23:42 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:23.062 10:23:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:23.062 | .driver_specific 00:23:23.062 | .nvme_error 00:23:23.062 | .status_code 00:23:23.062 | .command_transient_transport_error' 00:23:23.630 10:23:42 -- host/digest.sh@71 -- # (( 439 > 0 )) 00:23:23.630 10:23:42 -- host/digest.sh@73 -- # killprocess 97341 00:23:23.630 10:23:42 -- common/autotest_common.sh@936 -- # '[' -z 97341 ']' 00:23:23.630 10:23:42 -- common/autotest_common.sh@940 -- # kill -0 97341 00:23:23.630 10:23:42 -- common/autotest_common.sh@941 -- # uname 00:23:23.630 10:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.630 10:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97341 00:23:23.630 10:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:23.630 10:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:23.630 10:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97341' 00:23:23.630 killing process with pid 97341 00:23:23.630 10:23:42 -- common/autotest_common.sh@955 -- # kill 97341 00:23:23.630 Received shutdown signal, 
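The bdevperf summary above reports 6807.65 IOPS at a 131072-byte IO size, consistent with the 850.96 MiB/s column (6807.65 x 128 KiB is roughly 850.96 MiB/s), with an average latency of about 2344.80 us. The trace that follows then fetches the per-bdev NVMe error counters over the bdevperf RPC socket and, on the next line, asserts that the transient transport error count (439 in this run) is greater than zero. A condensed sketch of that check is below, reusing the rpc.py path, socket, and jq filter shown in the trace; it assumes a bdevperf instance is still listening on /var/tmp/bperf.sock.

# Condensed sketch of the transient-error check traced above: query bdevperf's
# RPC server for nvme0n1 iostat and require a non-zero transient transport
# error count. Paths, socket, and jq filter are taken from the trace itself.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

if (( errcount > 0 )); then
    echo "digest errors were detected and counted: $errcount"
else
    echo "no transient transport errors recorded" >&2
    exit 1
fi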
test time was about 2.000000 seconds 00:23:23.630 00:23:23.630 Latency(us) 00:23:23.630 [2024-11-19T10:23:43.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.630 [2024-11-19T10:23:43.176Z] =================================================================================================================== 00:23:23.630 [2024-11-19T10:23:43.176Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:23.630 10:23:42 -- common/autotest_common.sh@960 -- # wait 97341 00:23:23.630 10:23:43 -- host/digest.sh@115 -- # killprocess 97091 00:23:23.630 10:23:43 -- common/autotest_common.sh@936 -- # '[' -z 97091 ']' 00:23:23.630 10:23:43 -- common/autotest_common.sh@940 -- # kill -0 97091 00:23:23.630 10:23:43 -- common/autotest_common.sh@941 -- # uname 00:23:23.630 10:23:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:23.630 10:23:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97091 00:23:23.630 10:23:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:23.630 10:23:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:23.630 killing process with pid 97091 00:23:23.630 10:23:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97091' 00:23:23.630 10:23:43 -- common/autotest_common.sh@955 -- # kill 97091 00:23:23.630 10:23:43 -- common/autotest_common.sh@960 -- # wait 97091 00:23:23.889 00:23:23.889 real 0m15.914s 00:23:23.889 user 0m31.129s 00:23:23.889 sys 0m4.339s 00:23:23.889 10:23:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:23.889 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:23:23.889 ************************************ 00:23:23.889 END TEST nvmf_digest_error 00:23:23.889 ************************************ 00:23:23.889 10:23:43 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:23:23.889 10:23:43 -- host/digest.sh@139 -- # nvmftestfini 00:23:23.889 10:23:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:23.889 10:23:43 -- nvmf/common.sh@116 -- # sync 00:23:23.889 10:23:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:23.889 10:23:43 -- nvmf/common.sh@119 -- # set +e 00:23:23.889 10:23:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:23.889 10:23:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:23.889 rmmod nvme_tcp 00:23:23.889 rmmod nvme_fabrics 00:23:23.889 rmmod nvme_keyring 00:23:23.889 10:23:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:23.889 10:23:43 -- nvmf/common.sh@123 -- # set -e 00:23:23.889 10:23:43 -- nvmf/common.sh@124 -- # return 0 00:23:23.889 10:23:43 -- nvmf/common.sh@477 -- # '[' -n 97091 ']' 00:23:23.889 10:23:43 -- nvmf/common.sh@478 -- # killprocess 97091 00:23:23.889 10:23:43 -- common/autotest_common.sh@936 -- # '[' -z 97091 ']' 00:23:23.889 10:23:43 -- common/autotest_common.sh@940 -- # kill -0 97091 00:23:23.889 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (97091) - No such process 00:23:23.889 Process with pid 97091 is not found 00:23:23.889 10:23:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 97091 is not found' 00:23:23.889 10:23:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:23.889 10:23:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:23.889 10:23:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:23.889 10:23:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:23.889 10:23:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:23.889 10:23:43 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.889 10:23:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:23.889 10:23:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.889 10:23:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:23.889 00:23:23.889 real 0m32.043s 00:23:23.889 user 1m1.320s 00:23:23.889 sys 0m8.823s 00:23:23.889 10:23:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:23.889 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:23:23.889 ************************************ 00:23:23.889 END TEST nvmf_digest 00:23:23.889 ************************************ 00:23:23.889 10:23:43 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:23:23.889 10:23:43 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:23:23.889 10:23:43 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:23.889 10:23:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:24.149 10:23:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:24.149 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:23:24.149 ************************************ 00:23:24.149 START TEST nvmf_mdns_discovery 00:23:24.149 ************************************ 00:23:24.149 10:23:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:23:24.149 * Looking for test storage... 00:23:24.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:24.149 10:23:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:24.149 10:23:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:24.149 10:23:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:24.149 10:23:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:24.149 10:23:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:24.149 10:23:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:24.149 10:23:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:24.149 10:23:43 -- scripts/common.sh@335 -- # IFS=.-: 00:23:24.149 10:23:43 -- scripts/common.sh@335 -- # read -ra ver1 00:23:24.150 10:23:43 -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.150 10:23:43 -- scripts/common.sh@336 -- # read -ra ver2 00:23:24.150 10:23:43 -- scripts/common.sh@337 -- # local 'op=<' 00:23:24.150 10:23:43 -- scripts/common.sh@339 -- # ver1_l=2 00:23:24.150 10:23:43 -- scripts/common.sh@340 -- # ver2_l=1 00:23:24.150 10:23:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:24.150 10:23:43 -- scripts/common.sh@343 -- # case "$op" in 00:23:24.150 10:23:43 -- scripts/common.sh@344 -- # : 1 00:23:24.150 10:23:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:24.150 10:23:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.150 10:23:43 -- scripts/common.sh@364 -- # decimal 1 00:23:24.150 10:23:43 -- scripts/common.sh@352 -- # local d=1 00:23:24.150 10:23:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.150 10:23:43 -- scripts/common.sh@354 -- # echo 1 00:23:24.150 10:23:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:24.150 10:23:43 -- scripts/common.sh@365 -- # decimal 2 00:23:24.150 10:23:43 -- scripts/common.sh@352 -- # local d=2 00:23:24.150 10:23:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.150 10:23:43 -- scripts/common.sh@354 -- # echo 2 00:23:24.150 10:23:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:24.150 10:23:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:24.150 10:23:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:24.150 10:23:43 -- scripts/common.sh@367 -- # return 0 00:23:24.150 10:23:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.150 10:23:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.150 --rc genhtml_branch_coverage=1 00:23:24.150 --rc genhtml_function_coverage=1 00:23:24.150 --rc genhtml_legend=1 00:23:24.150 --rc geninfo_all_blocks=1 00:23:24.150 --rc geninfo_unexecuted_blocks=1 00:23:24.150 00:23:24.150 ' 00:23:24.150 10:23:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.150 --rc genhtml_branch_coverage=1 00:23:24.150 --rc genhtml_function_coverage=1 00:23:24.150 --rc genhtml_legend=1 00:23:24.150 --rc geninfo_all_blocks=1 00:23:24.150 --rc geninfo_unexecuted_blocks=1 00:23:24.150 00:23:24.150 ' 00:23:24.150 10:23:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.150 --rc genhtml_branch_coverage=1 00:23:24.150 --rc genhtml_function_coverage=1 00:23:24.150 --rc genhtml_legend=1 00:23:24.150 --rc geninfo_all_blocks=1 00:23:24.150 --rc geninfo_unexecuted_blocks=1 00:23:24.150 00:23:24.150 ' 00:23:24.150 10:23:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:24.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.150 --rc genhtml_branch_coverage=1 00:23:24.150 --rc genhtml_function_coverage=1 00:23:24.150 --rc genhtml_legend=1 00:23:24.150 --rc geninfo_all_blocks=1 00:23:24.150 --rc geninfo_unexecuted_blocks=1 00:23:24.150 00:23:24.150 ' 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:24.150 10:23:43 -- nvmf/common.sh@7 -- # uname -s 00:23:24.150 10:23:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.150 10:23:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.150 10:23:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.150 10:23:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.150 10:23:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.150 10:23:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.150 10:23:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.150 10:23:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.150 10:23:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.150 10:23:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
00:23:24.150 10:23:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:23:24.150 10:23:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.150 10:23:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.150 10:23:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:24.150 10:23:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:24.150 10:23:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.150 10:23:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.150 10:23:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.150 10:23:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.150 10:23:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.150 10:23:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.150 10:23:43 -- paths/export.sh@5 -- # export PATH 00:23:24.150 10:23:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.150 10:23:43 -- nvmf/common.sh@46 -- # : 0 00:23:24.150 10:23:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:24.150 10:23:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:24.150 10:23:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:24.150 10:23:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.150 10:23:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.150 10:23:43 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:23:24.150 10:23:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:24.150 10:23:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:23:24.150 10:23:43 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:23:24.150 10:23:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:24.150 10:23:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.150 10:23:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:24.150 10:23:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:24.150 10:23:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:24.150 10:23:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.150 10:23:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.150 10:23:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.150 10:23:43 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:24.150 10:23:43 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:24.150 10:23:43 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.150 10:23:43 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.150 10:23:43 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:24.150 10:23:43 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:24.150 10:23:43 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:24.150 10:23:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:24.150 10:23:43 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:24.150 10:23:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.150 10:23:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:24.150 10:23:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:24.150 10:23:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:24.150 10:23:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:24.150 10:23:43 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:24.150 10:23:43 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:24.150 Cannot find device "nvmf_tgt_br" 00:23:24.150 10:23:43 -- nvmf/common.sh@154 -- # true 00:23:24.150 10:23:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:24.150 Cannot find device "nvmf_tgt_br2" 00:23:24.150 10:23:43 -- nvmf/common.sh@155 -- # true 00:23:24.150 10:23:43 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:24.409 10:23:43 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:24.409 Cannot find device "nvmf_tgt_br" 00:23:24.409 10:23:43 -- nvmf/common.sh@157 -- # true 00:23:24.409 
10:23:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:24.409 Cannot find device "nvmf_tgt_br2" 00:23:24.409 10:23:43 -- nvmf/common.sh@158 -- # true 00:23:24.409 10:23:43 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:24.409 10:23:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:24.410 10:23:43 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:24.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:24.410 10:23:43 -- nvmf/common.sh@161 -- # true 00:23:24.410 10:23:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:24.410 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:24.410 10:23:43 -- nvmf/common.sh@162 -- # true 00:23:24.410 10:23:43 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:24.410 10:23:43 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:24.410 10:23:43 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:24.410 10:23:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:24.410 10:23:43 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:24.410 10:23:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:24.410 10:23:43 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:24.410 10:23:43 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:24.410 10:23:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:24.410 10:23:43 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:24.410 10:23:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:24.410 10:23:43 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:24.410 10:23:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:24.410 10:23:43 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:24.410 10:23:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:24.410 10:23:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:24.410 10:23:43 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:24.410 10:23:43 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:24.410 10:23:43 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:24.410 10:23:43 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:24.410 10:23:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:24.410 10:23:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:24.410 10:23:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:24.669 10:23:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:24.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:23:24.669 00:23:24.669 --- 10.0.0.2 ping statistics --- 00:23:24.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.669 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:24.669 10:23:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:24.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:24.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:23:24.669 00:23:24.669 --- 10.0.0.3 ping statistics --- 00:23:24.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.669 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:24.669 10:23:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:24.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:23:24.669 00:23:24.669 --- 10.0.0.1 ping statistics --- 00:23:24.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.669 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:24.669 10:23:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.669 10:23:43 -- nvmf/common.sh@421 -- # return 0 00:23:24.669 10:23:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:24.669 10:23:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.669 10:23:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:24.669 10:23:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:24.669 10:23:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.669 10:23:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:24.669 10:23:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:24.669 10:23:43 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:24.669 10:23:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:24.669 10:23:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.669 10:23:43 -- common/autotest_common.sh@10 -- # set +x 00:23:24.669 10:23:43 -- nvmf/common.sh@469 -- # nvmfpid=97651 00:23:24.669 10:23:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:24.669 10:23:43 -- nvmf/common.sh@470 -- # waitforlisten 97651 00:23:24.669 10:23:44 -- common/autotest_common.sh@829 -- # '[' -z 97651 ']' 00:23:24.669 10:23:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.669 10:23:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.669 10:23:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.669 10:23:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.669 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.669 [2024-11-19 10:23:44.066622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:24.669 [2024-11-19 10:23:44.066769] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.928 [2024-11-19 10:23:44.218784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.928 [2024-11-19 10:23:44.261648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:24.928 [2024-11-19 10:23:44.261990] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.928 [2024-11-19 10:23:44.262010] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
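For readers reconstructing the environment from this trace: the nvmf_veth_init sequence above builds a small bridged veth topology between the host and a dedicated network namespace, with 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, then verifies it with the three pings. A condensed sketch of that setup, with the interface and namespace names taken from the log (error handling and the iptables rules shown above are omitted, so this is illustrative rather than the verbatim helper):

# Minimal sketch of the veth/namespace topology assembled in the trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # host -> target, as in the trace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> host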
00:23:24.928 [2024-11-19 10:23:44.262023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.928 [2024-11-19 10:23:44.262065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.928 10:23:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.928 10:23:44 -- common/autotest_common.sh@862 -- # return 0 00:23:24.928 10:23:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:24.928 10:23:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 10:23:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 [2024-11-19 10:23:44.421479] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 [2024-11-19 10:23:44.433622] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 null0 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 null1 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 null2 00:23:24.928 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:24.928 null3 00:23:24.928 10:23:44 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.928 10:23:44 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:23:24.928 10:23:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.928 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:25.187 10:23:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.187 10:23:44 -- host/mdns_discovery.sh@47 -- # hostpid=97687 00:23:25.187 10:23:44 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:25.187 10:23:44 -- host/mdns_discovery.sh@48 -- # waitforlisten 97687 /tmp/host.sock 00:23:25.187 10:23:44 -- common/autotest_common.sh@829 -- # '[' -z 97687 ']' 00:23:25.187 10:23:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:25.187 10:23:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.187 10:23:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:25.187 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:25.187 10:23:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.187 10:23:44 -- common/autotest_common.sh@10 -- # set +x 00:23:25.187 [2024-11-19 10:23:44.531283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:25.187 [2024-11-19 10:23:44.531384] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97687 ] 00:23:25.187 [2024-11-19 10:23:44.671764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.187 [2024-11-19 10:23:44.710970] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:25.187 [2024-11-19 10:23:44.711217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.446 10:23:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.446 10:23:44 -- common/autotest_common.sh@862 -- # return 0 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@57 -- # avahipid=97704 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@58 -- # sleep 1 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:23:25.446 10:23:44 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:23:25.446 Process 1060 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:23:25.446 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:23:25.446 Successfully dropped root privileges. 00:23:25.446 avahi-daemon 0.8 starting up. 00:23:25.446 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:23:25.446 Successfully called chroot(). 00:23:25.446 Successfully dropped remaining capabilities. 00:23:25.446 No service file found in /etc/avahi/services. 00:23:26.380 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 
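The stretch of trace above brings up the three processes the mDNS test needs: the nvmf target inside the namespace (started with --wait-for-rpc so the discovery filter can be set before framework init), a second SPDK app acting as the host on /tmp/host.sock, and a private avahi-daemon confined to the two target-side interfaces. A condensed equivalent of that bring-up, with paths relative to the SPDK repo; the avahi config is written to a temporary file here purely for readability (the script feeds the same string through process substitution), and the test additionally waits for each RPC socket before issuing commands:

# Target: configure discovery filtering before the framework starts, then add the discovery listener.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
./scripts/rpc.py nvmf_set_config --discovery-filter=address    # default socket /var/tmp/spdk.sock
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

# Host: a second SPDK app that will run the mDNS browser, reachable on its own RPC socket.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

# mDNS responder: restricted to the target-side interfaces, IPv4 only (config string taken from the trace).
cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf &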
00:23:26.380 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:23:26.380 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:26.380 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:23:26.380 Network interface enumeration completed. 00:23:26.380 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:23:26.380 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:23:26.380 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:23:26.380 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:23:26.380 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 2455385228. 00:23:26.380 10:23:45 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:26.380 10:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.380 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:23:26.380 10:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.380 10:23:45 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:26.380 10:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.380 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.639 10:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.639 10:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.639 10:23:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:45 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.639 10:23:45 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.639 10:23:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:26.639 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.639 10:23:46 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.639 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.639 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:23:26.639 10:23:46 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:26.639 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.639 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.639 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@68 -- # sort 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@68 -- # xargs 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 [2024-11-19 10:23:46.222675] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@64 -- # sort 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@64 -- # xargs 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 [2024-11-19 10:23:46.306211] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 
-- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 [2024-11-19 10:23:46.350210] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:26.898 10:23:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.898 10:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 [2024-11-19 10:23:46.358150] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:26.898 10:23:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97755 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@125 -- # sleep 5 00:23:26.898 10:23:46 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:23:27.833 [2024-11-19 10:23:47.122669] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:27.833 Established under name 'CDC' 00:23:28.091 [2024-11-19 10:23:47.522726] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.091 [2024-11-19 10:23:47.523034] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:28.091 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.091 cookie is 0 00:23:28.091 is_local: 1 00:23:28.091 our_own: 0 00:23:28.091 wide_area: 0 00:23:28.091 multicast: 1 00:23:28.091 cached: 1 00:23:28.091 [2024-11-19 10:23:47.622692] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:28.091 [2024-11-19 10:23:47.622736] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:28.091 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:28.091 cookie is 0 00:23:28.091 is_local: 1 00:23:28.091 our_own: 0 00:23:28.091 wide_area: 0 00:23:28.091 multicast: 1 00:23:28.091 cached: 1 
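At this point both halves of the discovery handshake are in place: the host-side app is browsing for _nvme-disc._tcp (the bdev_nvme_start_mdns_discovery call earlier in the trace), and avahi-publish is advertising the discovery controller from inside the namespace, which is what produces the cached CDC records above and the mdns0_nvme0/mdns1_nvme0 attachments that follow. Reduced to the two commands plus the checks the test performs, with the socket path and NQNs as they appear in the trace:

# Host side: browse _nvme-disc._tcp and auto-attach any discovery controller found.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Target side: advertise the CDC service over mDNS from inside the namespace.
ip netns exec nvmf_tgt_ns_spdk avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 \
    "NQN=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" &

# Checks used by the test to confirm the discovery services, controllers and bdevs showed up.
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs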
00:23:29.025 [2024-11-19 10:23:48.528657] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:29.025 [2024-11-19 10:23:48.528697] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:29.025 [2024-11-19 10:23:48.528716] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:29.284 [2024-11-19 10:23:48.614792] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:23:29.284 [2024-11-19 10:23:48.628406] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.284 [2024-11-19 10:23:48.628542] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.284 [2024-11-19 10:23:48.628601] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.284 [2024-11-19 10:23:48.674466] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:29.284 [2024-11-19 10:23:48.674671] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:29.284 [2024-11-19 10:23:48.717102] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:23:29.284 [2024-11-19 10:23:48.778382] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:29.284 [2024-11-19 10:23:48.778588] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@80 -- # sort 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@80 -- # xargs 00:23:32.567 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.567 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.567 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.567 10:23:51 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@76 -- # sort 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@76 -- # xargs 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@68 -- # sort 00:23:32.568 
10:23:51 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@68 -- # xargs 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@64 -- # sort 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@64 -- # xargs 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # xargs 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@72 -- # xargs 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:32.568 10:23:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.568 10:23:51 -- common/autotest_common.sh@10 -- # set +x 00:23:32.568 10:23:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.568 10:23:51 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:33.503 10:23:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.503 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@64 -- # sort 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@64 -- # xargs 00:23:33.503 10:23:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:33.503 10:23:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.503 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:33.503 10:23:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:33.503 10:23:52 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:33.503 10:23:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.503 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.503 [2024-11-19 10:23:52.925295] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:33.503 [2024-11-19 10:23:52.926312] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:33.503 [2024-11-19 10:23:52.926343] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.503 [2024-11-19 10:23:52.926380] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:33.503 [2024-11-19 10:23:52.926401] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:33.503 10:23:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.504 10:23:52 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:33.504 10:23:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.504 10:23:52 -- common/autotest_common.sh@10 -- # set +x 00:23:33.504 [2024-11-19 10:23:52.933207] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:33.504 [2024-11-19 10:23:52.934301] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:33.504 [2024-11-19 10:23:52.934363] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:33.504 10:23:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.504 10:23:52 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:33.763 [2024-11-19 10:23:53.065426] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:33.763 [2024-11-19 10:23:53.065630] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:33.763 [2024-11-19 10:23:53.122712] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:33.763 [2024-11-19 10:23:53.122759] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:33.763 [2024-11-19 10:23:53.122767] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.763 [2024-11-19 10:23:53.122791] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:33.763 [2024-11-19 10:23:53.122859] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:33.763 [2024-11-19 10:23:53.122870] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:33.763 [2024-11-19 10:23:53.122877] 
bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:33.763 [2024-11-19 10:23:53.122892] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:33.763 [2024-11-19 10:23:53.168569] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:33.763 [2024-11-19 10:23:53.168616] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:33.763 [2024-11-19 10:23:53.168672] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:33.763 [2024-11-19 10:23:53.168686] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:34.697 10:23:53 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:34.697 10:23:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:34.697 10:23:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:34.698 10:23:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:53 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 10:23:53 -- host/mdns_discovery.sh@68 -- # sort 00:23:34.698 10:23:53 -- host/mdns_discovery.sh@68 -- # xargs 00:23:34.698 10:23:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:53 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:34.698 10:23:53 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.698 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@64 -- # sort 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@64 -- # xargs 00:23:34.698 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:34.698 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # xargs 00:23:34.698 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # 
sort -n 00:23:34.698 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@72 -- # xargs 00:23:34.698 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:34.698 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:34.698 10:23:54 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:34.698 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.698 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.698 [2024-11-19 10:23:54.242101] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:34.698 [2024-11-19 10:23:54.242141] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:34.958 [2024-11-19 10:23:54.243066] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:34.958 [2024-11-19 10:23:54.243094] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:34.958 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.958 10:23:54 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:34.958 10:23:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.958 10:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:34.958 [2024-11-19 10:23:54.248367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.248406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.248428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.248450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.248459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.248469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.248478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.248488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.250086] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:34.958 [2024-11-19 10:23:54.250147] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:34.958 10:23:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.958 10:23:54 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:34.958 [2024-11-19 10:23:54.255370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.255405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.255418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.255429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.255439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.255448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.255458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.958 [2024-11-19 10:23:54.255468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.958 [2024-11-19 10:23:54.255482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.258323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.265339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.268345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.958 [2024-11-19 10:23:54.268465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.268515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.268532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.958 [2024-11-19 10:23:54.268543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.268560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.268575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.958 [2024-11-19 10:23:54.268585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.958 [2024-11-19 10:23:54.268595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.958 [2024-11-19 10:23:54.268612] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.958 [2024-11-19 10:23:54.275348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.958 [2024-11-19 10:23:54.275445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.275492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.275508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.958 [2024-11-19 10:23:54.275520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.275536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.275551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.958 [2024-11-19 10:23:54.275559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.958 [2024-11-19 10:23:54.275569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.958 [2024-11-19 10:23:54.275584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.958 [2024-11-19 10:23:54.278409] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.958 [2024-11-19 10:23:54.278496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.278541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.278557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.958 [2024-11-19 10:23:54.278568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.278585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.278599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.958 [2024-11-19 10:23:54.278607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.958 [2024-11-19 10:23:54.278616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.958 [2024-11-19 10:23:54.278631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
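For reference, the listener reconfiguration traced above is plain SPDK JSON-RPC traffic; a minimal manual equivalent (a sketch only, assuming rpc_cmd wraps scripts/rpc.py from the SPDK tree, as it does in this harness) would be:

    # add the 4421 listeners that the discovery pollers then report as new paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421
    # drop the original 4420 listeners; each AER makes the host re-fetch the discovery
    # log page, after which the 4420 paths are reported as "not found"
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420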
00:23:34.958 [2024-11-19 10:23:54.285410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.958 [2024-11-19 10:23:54.285507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.285554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.285570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.958 [2024-11-19 10:23:54.285580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.958 [2024-11-19 10:23:54.285597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.958 [2024-11-19 10:23:54.285612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.958 [2024-11-19 10:23:54.285620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.958 [2024-11-19 10:23:54.285630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.958 [2024-11-19 10:23:54.285645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.958 [2024-11-19 10:23:54.288465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.958 [2024-11-19 10:23:54.288549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.958 [2024-11-19 10:23:54.288595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.288611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.959 [2024-11-19 10:23:54.288621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.288637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.288652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.288661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.288670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.959 [2024-11-19 10:23:54.288685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.959 [2024-11-19 10:23:54.295476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.959 [2024-11-19 10:23:54.295571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.295617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.295634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.959 [2024-11-19 10:23:54.295644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.295661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.295675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.295684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.295694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.959 [2024-11-19 10:23:54.295709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.959 [2024-11-19 10:23:54.298522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.959 [2024-11-19 10:23:54.298613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.298659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.298675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.959 [2024-11-19 10:23:54.298685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.298702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.298716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.298725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.298734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.959 [2024-11-19 10:23:54.298749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.959 [2024-11-19 10:23:54.305538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.959 [2024-11-19 10:23:54.305650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.305698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.305714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.959 [2024-11-19 10:23:54.305725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.305742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.305758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.305767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.305776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.959 [2024-11-19 10:23:54.305791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.959 [2024-11-19 10:23:54.308582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.959 [2024-11-19 10:23:54.308671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.308716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.308732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.959 [2024-11-19 10:23:54.308742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.308759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.308773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.308782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.308791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.959 [2024-11-19 10:23:54.308805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.959 [2024-11-19 10:23:54.315614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.959 [2024-11-19 10:23:54.315702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.315747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.315763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.959 [2024-11-19 10:23:54.315773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.315790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.315804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.315813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.315838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.959 [2024-11-19 10:23:54.315856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.959 [2024-11-19 10:23:54.318638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.959 [2024-11-19 10:23:54.318722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.318766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.318781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.959 [2024-11-19 10:23:54.318792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.318808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.318834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.318845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.318854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.959 [2024-11-19 10:23:54.318869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.959 [2024-11-19 10:23:54.325672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.959 [2024-11-19 10:23:54.325761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.325806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.325835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.959 [2024-11-19 10:23:54.325847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.325864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.325879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.325888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.959 [2024-11-19 10:23:54.325897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.959 [2024-11-19 10:23:54.325913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.959 [2024-11-19 10:23:54.328694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.959 [2024-11-19 10:23:54.328780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.328838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.959 [2024-11-19 10:23:54.328857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.959 [2024-11-19 10:23:54.328867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.959 [2024-11-19 10:23:54.328884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.959 [2024-11-19 10:23:54.328898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.959 [2024-11-19 10:23:54.328907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.328916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.960 [2024-11-19 10:23:54.328931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.960 [2024-11-19 10:23:54.335731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.960 [2024-11-19 10:23:54.335840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.335891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.335907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.960 [2024-11-19 10:23:54.335918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.335935] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.335951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.335959] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.335969] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.960 [2024-11-19 10:23:54.335985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.960 [2024-11-19 10:23:54.338752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.960 [2024-11-19 10:23:54.338869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.338918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.338934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.960 [2024-11-19 10:23:54.338945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.338962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.338977] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.338986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.338995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.960 [2024-11-19 10:23:54.339024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.960 [2024-11-19 10:23:54.345790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.960 [2024-11-19 10:23:54.345895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.345942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.345958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.960 [2024-11-19 10:23:54.345969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.345985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.346000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.346008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.346018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.960 [2024-11-19 10:23:54.346033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.960 [2024-11-19 10:23:54.348832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.960 [2024-11-19 10:23:54.348919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.348965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.348981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.960 [2024-11-19 10:23:54.348992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.349009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.349023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.349031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.349041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.960 [2024-11-19 10:23:54.349055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.960 [2024-11-19 10:23:54.355862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.960 [2024-11-19 10:23:54.355950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.355996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.356011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.960 [2024-11-19 10:23:54.356022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.356039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.356053] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.356062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.356071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.960 [2024-11-19 10:23:54.356086] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.960 [2024-11-19 10:23:54.358888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.960 [2024-11-19 10:23:54.358972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.359033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.359051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.960 [2024-11-19 10:23:54.359062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.359080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.359095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.359103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.359112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.960 [2024-11-19 10:23:54.359127] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.960 [2024-11-19 10:23:54.365920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.960 [2024-11-19 10:23:54.366007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.366052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.366068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.960 [2024-11-19 10:23:54.366079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.366096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.366110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.366119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.366128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.960 [2024-11-19 10:23:54.366143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.960 [2024-11-19 10:23:54.368943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.960 [2024-11-19 10:23:54.369043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.369090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.369106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.960 [2024-11-19 10:23:54.369117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.960 [2024-11-19 10:23:54.369134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.960 [2024-11-19 10:23:54.369149] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.960 [2024-11-19 10:23:54.369163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.960 [2024-11-19 10:23:54.369176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.960 [2024-11-19 10:23:54.369192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:34.960 [2024-11-19 10:23:54.375978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:34.960 [2024-11-19 10:23:54.376067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.376113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.960 [2024-11-19 10:23:54.376128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f0070 with addr=10.0.0.3, port=4420 00:23:34.960 [2024-11-19 10:23:54.376139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f0070 is same with the state(5) to be set 00:23:34.961 [2024-11-19 10:23:54.376156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f0070 (9): Bad file descriptor 00:23:34.961 [2024-11-19 10:23:54.376170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:34.961 [2024-11-19 10:23:54.376179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:34.961 [2024-11-19 10:23:54.376188] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:34.961 [2024-11-19 10:23:54.376203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.961 [2024-11-19 10:23:54.379015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:34.961 [2024-11-19 10:23:54.379100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.961 [2024-11-19 10:23:54.379145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:34.961 [2024-11-19 10:23:54.379161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x542d00 with addr=10.0.0.2, port=4420 00:23:34.961 [2024-11-19 10:23:54.379171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x542d00 is same with the state(5) to be set 00:23:34.961 [2024-11-19 10:23:54.379188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x542d00 (9): Bad file descriptor 00:23:34.961 [2024-11-19 10:23:54.379202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:34.961 [2024-11-19 10:23:54.379211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:34.961 [2024-11-19 10:23:54.379220] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:34.961 [2024-11-19 10:23:54.379234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
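The burst of "connect() failed, errno = 111" retries above is consistent with the host still holding controllers whose 10.0.0.2:4420 and 10.0.0.3:4420 listeners were just removed: every reconnect attempt still targets port 4420 and is refused, so each reset ends in "Resetting controller failed." until the discovery pollers drop those paths. On Linux, errno 111 is ECONNREFUSED, which can be confirmed with a one-liner:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused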
00:23:34.961 [2024-11-19 10:23:54.381268] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:34.961 [2024-11-19 10:23:54.381305] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:34.961 [2024-11-19 10:23:54.381329] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:34.961 [2024-11-19 10:23:54.381367] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:34.961 [2024-11-19 10:23:54.381384] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:34.961 [2024-11-19 10:23:54.381398] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:34.961 [2024-11-19 10:23:54.467367] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:34.961 [2024-11-19 10:23:54.467436] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:35.895 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@68 -- # sort 00:23:35.895 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@68 -- # xargs 00:23:35.895 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:35.895 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.895 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@64 -- # sort 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@64 -- # xargs 00:23:35.895 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # xargs 00:23:35.895 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.895 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
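The path checks in this part of the run read the per-controller transport IDs back over the host RPC socket; a stand-alone version of the same query (a sketch assuming scripts/rpc.py and the /tmp/host.sock socket used throughout this run) is:

    # list the service ports (trsvcid) of every path attached to mdns0_nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected after the listener switch: 4421 only (the 4420 path is gone)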
00:23:35.895 10:23:55 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:35.895 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.895 10:23:55 -- host/mdns_discovery.sh@72 -- # xargs 00:23:35.895 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.895 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:36.153 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.153 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:36.153 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:36.153 10:23:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.153 10:23:55 -- common/autotest_common.sh@10 -- # set +x 00:23:36.153 10:23:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.153 10:23:55 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:36.153 [2024-11-19 10:23:55.622716] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:37.086 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@80 -- # xargs 00:23:37.086 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@80 -- # sort 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:37.086 10:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:37.086 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.086 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@68 -- # sort 00:23:37.086 10:23:56 -- host/mdns_discovery.sh@68 -- # xargs 00:23:37.086 10:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.344 10:23:56 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:37.345 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@64 -- # sort 00:23:37.345 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@64 -- # xargs 00:23:37.345 10:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:37.345 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.345 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.345 10:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:37.345 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.345 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.345 10:23:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:37.345 10:23:56 -- common/autotest_common.sh@650 -- # local es=0 00:23:37.345 10:23:56 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:37.345 10:23:56 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:37.345 10:23:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.345 10:23:56 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:37.345 10:23:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:37.345 10:23:56 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:37.345 10:23:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.345 10:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:37.345 [2024-11-19 10:23:56.819835] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:37.345 2024/11/19 10:23:56 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:37.345 request: 00:23:37.345 { 00:23:37.345 "method": "bdev_nvme_start_mdns_discovery", 00:23:37.345 "params": { 00:23:37.345 "name": "mdns", 00:23:37.345 "svcname": "_nvme-disc._http", 00:23:37.345 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:37.345 } 00:23:37.345 } 00:23:37.345 Got JSON-RPC error response 00:23:37.345 GoRPCClient: error on JSON-RPC call 00:23:37.345 10:23:56 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:37.345 10:23:56 -- 
common/autotest_common.sh@653 -- # es=1 00:23:37.345 10:23:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:37.345 10:23:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:37.345 10:23:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:37.345 10:23:56 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:37.911 [2024-11-19 10:23:57.208406] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:37.911 [2024-11-19 10:23:57.308404] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:37.911 [2024-11-19 10:23:57.408414] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:37.911 [2024-11-19 10:23:57.408644] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:23:37.911 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:37.911 cookie is 0 00:23:37.911 is_local: 1 00:23:37.911 our_own: 0 00:23:37.911 wide_area: 0 00:23:37.911 multicast: 1 00:23:37.911 cached: 1 00:23:38.168 [2024-11-19 10:23:57.508414] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:38.168 [2024-11-19 10:23:57.508671] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:23:38.168 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:38.168 cookie is 0 00:23:38.168 is_local: 1 00:23:38.168 our_own: 0 00:23:38.168 wide_area: 0 00:23:38.168 multicast: 1 00:23:38.168 cached: 1 00:23:39.103 [2024-11-19 10:23:58.420545] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:39.103 [2024-11-19 10:23:58.420757] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:39.103 [2024-11-19 10:23:58.420842] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:39.103 [2024-11-19 10:23:58.506672] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:39.103 [2024-11-19 10:23:58.520449] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.103 [2024-11-19 10:23:58.520597] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.103 [2024-11-19 10:23:58.520657] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.103 [2024-11-19 10:23:58.575973] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:39.103 [2024-11-19 10:23:58.576244] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:39.103 [2024-11-19 10:23:58.606457] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:39.361 [2024-11-19 10:23:58.665840] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:39.361 [2024-11-19 10:23:58.666052] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:42.705 10:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.705 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@80 -- # xargs 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@80 -- # sort 00:23:42.705 10:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:42.705 10:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@76 -- # xargs 00:23:42.705 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@76 -- # sort 00:23:42.705 10:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:42.705 10:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.705 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@64 -- # sort 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@64 -- # xargs 00:23:42.705 10:24:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:42.705 10:24:01 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:42.705 10:24:01 -- common/autotest_common.sh@650 -- # local es=0 00:23:42.705 10:24:01 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:42.705 10:24:01 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:42.705 10:24:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.705 10:24:01 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:42.705 10:24:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:42.705 10:24:01 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:42.705 10:24:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.705 10:24:01 -- common/autotest_common.sh@10 -- # set +x 00:23:42.705 [2024-11-19 10:24:02.000533] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:42.705 2024/11/19 10:24:02 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:42.705 request: 00:23:42.705 { 00:23:42.705 "method": "bdev_nvme_start_mdns_discovery", 00:23:42.705 "params": { 00:23:42.705 "name": "cdc", 00:23:42.705 "svcname": "_nvme-disc._tcp", 00:23:42.705 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:42.705 } 00:23:42.705 } 00:23:42.705 Got JSON-RPC error response 00:23:42.705 GoRPCClient: error on JSON-RPC call 00:23:42.705 10:24:02 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:42.705 10:24:02 -- common/autotest_common.sh@653 -- # es=1 00:23:42.705 10:24:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:42.705 10:24:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:42.705 10:24:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:42.705 10:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@76 -- # sort 00:23:42.705 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@76 -- # xargs 00:23:42.705 10:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.705 10:24:02 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@64 -- # sort 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:42.706 10:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.706 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@64 -- # xargs 00:23:42.706 10:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:42.706 10:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.706 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:42.706 10:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@197 -- # kill 97687 00:23:42.706 10:24:02 -- host/mdns_discovery.sh@200 -- # wait 97687 00:23:42.706 [2024-11-19 10:24:02.208390] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:42.964 10:24:02 -- host/mdns_discovery.sh@201 -- # kill 97755 00:23:42.965 Got SIGTERM, quitting. 00:23:42.965 10:24:02 -- host/mdns_discovery.sh@202 -- # kill 97704 00:23:42.965 10:24:02 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:42.965 10:24:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:42.965 10:24:02 -- nvmf/common.sh@116 -- # sync 00:23:42.965 Got SIGTERM, quitting. 
00:23:42.965 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:42.965 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:42.965 avahi-daemon 0.8 exiting. 00:23:42.965 10:24:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:42.965 10:24:02 -- nvmf/common.sh@119 -- # set +e 00:23:42.965 10:24:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:42.965 10:24:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:42.965 rmmod nvme_tcp 00:23:42.965 rmmod nvme_fabrics 00:23:42.965 rmmod nvme_keyring 00:23:42.965 10:24:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:42.965 10:24:02 -- nvmf/common.sh@123 -- # set -e 00:23:42.965 10:24:02 -- nvmf/common.sh@124 -- # return 0 00:23:42.965 10:24:02 -- nvmf/common.sh@477 -- # '[' -n 97651 ']' 00:23:42.965 10:24:02 -- nvmf/common.sh@478 -- # killprocess 97651 00:23:42.965 10:24:02 -- common/autotest_common.sh@936 -- # '[' -z 97651 ']' 00:23:42.965 10:24:02 -- common/autotest_common.sh@940 -- # kill -0 97651 00:23:42.965 10:24:02 -- common/autotest_common.sh@941 -- # uname 00:23:42.965 10:24:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:42.965 10:24:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97651 00:23:42.965 killing process with pid 97651 00:23:42.965 10:24:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:42.965 10:24:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:42.965 10:24:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97651' 00:23:42.965 10:24:02 -- common/autotest_common.sh@955 -- # kill 97651 00:23:42.965 10:24:02 -- common/autotest_common.sh@960 -- # wait 97651 00:23:43.224 10:24:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:43.224 10:24:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:43.224 10:24:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:43.224 10:24:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.224 10:24:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:43.224 10:24:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.224 10:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.224 10:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.224 10:24:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:43.224 00:23:43.224 real 0m19.165s 00:23:43.224 user 0m38.029s 00:23:43.224 sys 0m1.802s 00:23:43.224 ************************************ 00:23:43.224 END TEST nvmf_mdns_discovery 00:23:43.224 ************************************ 00:23:43.224 10:24:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:43.224 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.224 10:24:02 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:43.224 10:24:02 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:43.224 10:24:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.224 10:24:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.224 10:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:43.224 ************************************ 00:23:43.224 START TEST nvmf_multipath 00:23:43.224 ************************************ 00:23:43.224 10:24:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:43.224 * Looking for 
test storage... 00:23:43.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:43.224 10:24:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:43.224 10:24:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:43.224 10:24:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:43.483 10:24:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:43.483 10:24:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:43.483 10:24:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:43.483 10:24:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:43.483 10:24:02 -- scripts/common.sh@335 -- # IFS=.-: 00:23:43.483 10:24:02 -- scripts/common.sh@335 -- # read -ra ver1 00:23:43.483 10:24:02 -- scripts/common.sh@336 -- # IFS=.-: 00:23:43.483 10:24:02 -- scripts/common.sh@336 -- # read -ra ver2 00:23:43.483 10:24:02 -- scripts/common.sh@337 -- # local 'op=<' 00:23:43.483 10:24:02 -- scripts/common.sh@339 -- # ver1_l=2 00:23:43.483 10:24:02 -- scripts/common.sh@340 -- # ver2_l=1 00:23:43.483 10:24:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:43.483 10:24:02 -- scripts/common.sh@343 -- # case "$op" in 00:23:43.483 10:24:02 -- scripts/common.sh@344 -- # : 1 00:23:43.483 10:24:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:43.483 10:24:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:43.483 10:24:02 -- scripts/common.sh@364 -- # decimal 1 00:23:43.483 10:24:02 -- scripts/common.sh@352 -- # local d=1 00:23:43.483 10:24:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:43.483 10:24:02 -- scripts/common.sh@354 -- # echo 1 00:23:43.483 10:24:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:43.483 10:24:02 -- scripts/common.sh@365 -- # decimal 2 00:23:43.483 10:24:02 -- scripts/common.sh@352 -- # local d=2 00:23:43.483 10:24:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:43.483 10:24:02 -- scripts/common.sh@354 -- # echo 2 00:23:43.483 10:24:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:43.483 10:24:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:43.483 10:24:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:43.483 10:24:02 -- scripts/common.sh@367 -- # return 0 00:23:43.483 10:24:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:43.483 10:24:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.483 --rc genhtml_branch_coverage=1 00:23:43.483 --rc genhtml_function_coverage=1 00:23:43.483 --rc genhtml_legend=1 00:23:43.483 --rc geninfo_all_blocks=1 00:23:43.483 --rc geninfo_unexecuted_blocks=1 00:23:43.483 00:23:43.483 ' 00:23:43.483 10:24:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.483 --rc genhtml_branch_coverage=1 00:23:43.483 --rc genhtml_function_coverage=1 00:23:43.483 --rc genhtml_legend=1 00:23:43.483 --rc geninfo_all_blocks=1 00:23:43.483 --rc geninfo_unexecuted_blocks=1 00:23:43.483 00:23:43.483 ' 00:23:43.483 10:24:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.483 --rc genhtml_branch_coverage=1 00:23:43.483 --rc genhtml_function_coverage=1 00:23:43.483 --rc genhtml_legend=1 00:23:43.483 --rc geninfo_all_blocks=1 00:23:43.483 --rc geninfo_unexecuted_blocks=1 00:23:43.483 00:23:43.483 ' 
00:23:43.483 10:24:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:43.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:43.483 --rc genhtml_branch_coverage=1 00:23:43.483 --rc genhtml_function_coverage=1 00:23:43.483 --rc genhtml_legend=1 00:23:43.483 --rc geninfo_all_blocks=1 00:23:43.483 --rc geninfo_unexecuted_blocks=1 00:23:43.483 00:23:43.483 ' 00:23:43.483 10:24:02 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.483 10:24:02 -- nvmf/common.sh@7 -- # uname -s 00:23:43.483 10:24:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.483 10:24:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.483 10:24:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.483 10:24:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.483 10:24:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.483 10:24:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.483 10:24:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.483 10:24:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.483 10:24:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.483 10:24:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:23:43.483 10:24:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:23:43.483 10:24:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.483 10:24:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.483 10:24:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.483 10:24:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.483 10:24:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.483 10:24:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.483 10:24:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.483 10:24:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.483 10:24:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.483 10:24:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.483 10:24:02 -- paths/export.sh@5 -- # export PATH 00:23:43.483 10:24:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.483 10:24:02 -- nvmf/common.sh@46 -- # : 0 00:23:43.483 10:24:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:43.483 10:24:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:43.483 10:24:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:43.483 10:24:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.483 10:24:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.483 10:24:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:43.483 10:24:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:43.483 10:24:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:43.483 10:24:02 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.483 10:24:02 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.483 10:24:02 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.483 10:24:02 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:43.483 10:24:02 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.483 10:24:02 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:43.483 10:24:02 -- host/multipath.sh@30 -- # nvmftestinit 00:23:43.483 10:24:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:43.483 10:24:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.483 10:24:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:43.483 10:24:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:43.483 10:24:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:43.483 10:24:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.483 10:24:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.483 10:24:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.483 10:24:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:43.483 10:24:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:43.483 10:24:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.483 10:24:02 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.483 10:24:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:43.483 10:24:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:43.483 10:24:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.483 10:24:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.483 10:24:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.483 10:24:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.483 10:24:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.483 10:24:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.484 10:24:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.484 10:24:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.484 10:24:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:43.484 10:24:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:43.484 Cannot find device "nvmf_tgt_br" 00:23:43.484 10:24:02 -- nvmf/common.sh@154 -- # true 00:23:43.484 10:24:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.484 Cannot find device "nvmf_tgt_br2" 00:23:43.484 10:24:02 -- nvmf/common.sh@155 -- # true 00:23:43.484 10:24:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:43.484 10:24:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:43.484 Cannot find device "nvmf_tgt_br" 00:23:43.484 10:24:02 -- nvmf/common.sh@157 -- # true 00:23:43.484 10:24:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:43.484 Cannot find device "nvmf_tgt_br2" 00:23:43.484 10:24:02 -- nvmf/common.sh@158 -- # true 00:23:43.484 10:24:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:43.484 10:24:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:43.484 10:24:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.484 10:24:03 -- nvmf/common.sh@161 -- # true 00:23:43.484 10:24:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.484 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.484 10:24:03 -- nvmf/common.sh@162 -- # true 00:23:43.484 10:24:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.484 10:24:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.742 10:24:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.742 10:24:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.742 10:24:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.742 10:24:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.742 10:24:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.742 10:24:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:43.742 10:24:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:43.742 10:24:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:43.742 10:24:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:43.742 10:24:03 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:23:43.742 10:24:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:43.742 10:24:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.742 10:24:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.742 10:24:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.742 10:24:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:43.742 10:24:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:23:43.742 10:24:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.742 10:24:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.742 10:24:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.742 10:24:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.742 10:24:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.742 10:24:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:43.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:23:43.742 00:23:43.742 --- 10.0.0.2 ping statistics --- 00:23:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.742 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:43.742 10:24:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:43.742 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.742 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:43.742 00:23:43.742 --- 10.0.0.3 ping statistics --- 00:23:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.742 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:43.742 10:24:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:23:43.742 00:23:43.742 --- 10.0.0.1 ping statistics --- 00:23:43.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.742 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:43.742 10:24:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.742 10:24:03 -- nvmf/common.sh@421 -- # return 0 00:23:43.742 10:24:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:43.742 10:24:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.742 10:24:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:43.742 10:24:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:43.742 10:24:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.742 10:24:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:43.742 10:24:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:43.742 10:24:03 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:43.742 10:24:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:43.742 10:24:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:43.742 10:24:03 -- common/autotest_common.sh@10 -- # set +x 00:23:43.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
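Side note: the nvmf_veth_init sequence traced above boils down to the sketch below (interface names and addresses taken verbatim from the log; the real helper in test/nvmf/common.sh may order or guard the steps slightly differently):

    # condensed sketch of the veth/namespace topology built by nvmf_veth_init (from the trace above)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # second target address
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

10.0.0.1 stays on the initiator side while 10.0.0.2 and 10.0.0.3 live inside the nvmf_tgt_ns_spdk namespace, which is why the ping checks above exercise both directions before nvmf_tgt is started.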
00:23:43.742 10:24:03 -- nvmf/common.sh@469 -- # nvmfpid=98275 00:23:43.742 10:24:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:43.742 10:24:03 -- nvmf/common.sh@470 -- # waitforlisten 98275 00:23:43.742 10:24:03 -- common/autotest_common.sh@829 -- # '[' -z 98275 ']' 00:23:43.742 10:24:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.742 10:24:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:43.742 10:24:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.742 10:24:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:43.742 10:24:03 -- common/autotest_common.sh@10 -- # set +x 00:23:44.001 [2024-11-19 10:24:03.291023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:44.001 [2024-11-19 10:24:03.291351] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.001 [2024-11-19 10:24:03.429091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:44.001 [2024-11-19 10:24:03.469610] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:44.001 [2024-11-19 10:24:03.470002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.001 [2024-11-19 10:24:03.470071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.001 [2024-11-19 10:24:03.470230] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.001 [2024-11-19 10:24:03.470800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.001 [2024-11-19 10:24:03.470856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.937 10:24:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:44.937 10:24:04 -- common/autotest_common.sh@862 -- # return 0 00:23:44.937 10:24:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:44.937 10:24:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:44.937 10:24:04 -- common/autotest_common.sh@10 -- # set +x 00:23:44.937 10:24:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.937 10:24:04 -- host/multipath.sh@33 -- # nvmfapp_pid=98275 00:23:44.937 10:24:04 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:45.196 [2024-11-19 10:24:04.600558] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.196 10:24:04 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:45.454 Malloc0 00:23:45.454 10:24:04 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:45.713 10:24:05 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.971 10:24:05 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.229 [2024-11-19 10:24:05.727318] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.229 10:24:05 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:46.487 [2024-11-19 10:24:05.979512] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:46.487 10:24:06 -- host/multipath.sh@44 -- # bdevperf_pid=98373 00:23:46.487 10:24:06 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:46.487 10:24:06 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:46.487 10:24:06 -- host/multipath.sh@47 -- # waitforlisten 98373 /var/tmp/bdevperf.sock 00:23:46.487 10:24:06 -- common/autotest_common.sh@829 -- # '[' -z 98373 ']' 00:23:46.487 10:24:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:46.487 10:24:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.487 10:24:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:46.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
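Condensed from the trace above, the target-side configuration for this multipath run is a short RPC sequence against the freshly started nvmf_tgt (paths, NQN, sizes and flags exactly as captured in the log; the inline comments are only my reading of those flags):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                               # TCP transport, options as captured above
    $rpc bdev_malloc_create 64 512 -b Malloc0                                  # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -a any host, -r ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # first path
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # second path, same IP, different port

bdevperf is then brought up on /var/tmp/bdevperf.sock and, in the trace that follows, attaches Nvme0 to both listeners (the second attach passes -x multipath). From there the test repeatedly flips the listeners' ANA states and runs the bpftrace helper (nvmf_path.bt) to confirm which port is actually carrying I/O.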
00:23:46.487 10:24:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.487 10:24:06 -- common/autotest_common.sh@10 -- # set +x 00:23:47.054 10:24:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:47.054 10:24:06 -- common/autotest_common.sh@862 -- # return 0 00:23:47.054 10:24:06 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:47.054 10:24:06 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:47.626 Nvme0n1 00:23:47.626 10:24:06 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:47.889 Nvme0n1 00:23:47.889 10:24:07 -- host/multipath.sh@78 -- # sleep 1 00:23:47.889 10:24:07 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:49.263 10:24:08 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:49.263 10:24:08 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.263 10:24:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.522 10:24:09 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:49.522 10:24:09 -- host/multipath.sh@65 -- # dtrace_pid=98452 00:23:49.522 10:24:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:49.522 10:24:09 -- host/multipath.sh@66 -- # sleep 6 00:23:56.084 10:24:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:56.084 10:24:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:56.084 10:24:15 -- host/multipath.sh@67 -- # active_port=4421 00:23:56.084 10:24:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.084 Attaching 4 probes... 
00:23:56.084 @path[10.0.0.2, 4421]: 18022 00:23:56.084 @path[10.0.0.2, 4421]: 18717 00:23:56.084 @path[10.0.0.2, 4421]: 18623 00:23:56.084 @path[10.0.0.2, 4421]: 18592 00:23:56.084 @path[10.0.0.2, 4421]: 18620 00:23:56.084 10:24:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:56.084 10:24:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:56.084 10:24:15 -- host/multipath.sh@69 -- # sed -n 1p 00:23:56.084 10:24:15 -- host/multipath.sh@69 -- # port=4421 00:23:56.084 10:24:15 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:56.084 10:24:15 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:56.084 10:24:15 -- host/multipath.sh@72 -- # kill 98452 00:23:56.084 10:24:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:56.084 10:24:15 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:56.084 10:24:15 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:56.343 10:24:15 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:56.600 10:24:15 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:56.600 10:24:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:56.600 10:24:15 -- host/multipath.sh@65 -- # dtrace_pid=98587 00:23:56.600 10:24:15 -- host/multipath.sh@66 -- # sleep 6 00:24:03.159 10:24:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:03.159 10:24:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:03.159 10:24:22 -- host/multipath.sh@67 -- # active_port=4420 00:24:03.159 10:24:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.159 Attaching 4 probes... 
00:24:03.159 @path[10.0.0.2, 4420]: 18338 00:24:03.159 @path[10.0.0.2, 4420]: 18221 00:24:03.159 @path[10.0.0.2, 4420]: 18759 00:24:03.159 @path[10.0.0.2, 4420]: 18464 00:24:03.159 @path[10.0.0.2, 4420]: 18715 00:24:03.159 10:24:22 -- host/multipath.sh@69 -- # sed -n 1p 00:24:03.159 10:24:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:03.159 10:24:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:03.159 10:24:22 -- host/multipath.sh@69 -- # port=4420 00:24:03.159 10:24:22 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:03.159 10:24:22 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:03.159 10:24:22 -- host/multipath.sh@72 -- # kill 98587 00:24:03.159 10:24:22 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:03.159 10:24:22 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:03.159 10:24:22 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:03.159 10:24:22 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:03.417 10:24:22 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:03.417 10:24:22 -- host/multipath.sh@65 -- # dtrace_pid=98723 00:24:03.417 10:24:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:03.417 10:24:22 -- host/multipath.sh@66 -- # sleep 6 00:24:09.977 10:24:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:09.977 10:24:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:09.977 10:24:29 -- host/multipath.sh@67 -- # active_port=4421 00:24:09.977 10:24:29 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.977 Attaching 4 probes... 
00:24:09.977 @path[10.0.0.2, 4421]: 15845 00:24:09.977 @path[10.0.0.2, 4421]: 18070 00:24:09.977 @path[10.0.0.2, 4421]: 18213 00:24:09.977 @path[10.0.0.2, 4421]: 18359 00:24:09.977 @path[10.0.0.2, 4421]: 18367 00:24:09.977 10:24:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:09.977 10:24:29 -- host/multipath.sh@69 -- # sed -n 1p 00:24:09.977 10:24:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:09.977 10:24:29 -- host/multipath.sh@69 -- # port=4421 00:24:09.977 10:24:29 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.977 10:24:29 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:09.977 10:24:29 -- host/multipath.sh@72 -- # kill 98723 00:24:09.977 10:24:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:09.977 10:24:29 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:09.977 10:24:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:09.977 10:24:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:10.543 10:24:29 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:10.543 10:24:29 -- host/multipath.sh@65 -- # dtrace_pid=98854 00:24:10.543 10:24:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:10.543 10:24:29 -- host/multipath.sh@66 -- # sleep 6 00:24:17.107 10:24:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:17.107 10:24:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:17.107 10:24:36 -- host/multipath.sh@67 -- # active_port= 00:24:17.107 10:24:36 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:17.107 Attaching 4 probes... 
00:24:17.107 00:24:17.107 00:24:17.107 00:24:17.107 00:24:17.107 00:24:17.107 10:24:36 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:17.107 10:24:36 -- host/multipath.sh@69 -- # sed -n 1p 00:24:17.107 10:24:36 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:17.107 10:24:36 -- host/multipath.sh@69 -- # port= 00:24:17.107 10:24:36 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:17.107 10:24:36 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:17.107 10:24:36 -- host/multipath.sh@72 -- # kill 98854 00:24:17.107 10:24:36 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:17.107 10:24:36 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:17.107 10:24:36 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.107 10:24:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.366 10:24:36 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:17.366 10:24:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:17.366 10:24:36 -- host/multipath.sh@65 -- # dtrace_pid=98985 00:24:17.366 10:24:36 -- host/multipath.sh@66 -- # sleep 6 00:24:23.952 10:24:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:23.952 10:24:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:23.952 10:24:43 -- host/multipath.sh@67 -- # active_port=4421 00:24:23.952 10:24:43 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.952 Attaching 4 probes... 
00:24:23.952 @path[10.0.0.2, 4421]: 17881 00:24:23.952 @path[10.0.0.2, 4421]: 17874 00:24:23.952 @path[10.0.0.2, 4421]: 18130 00:24:23.952 @path[10.0.0.2, 4421]: 18235 00:24:23.952 @path[10.0.0.2, 4421]: 18220 00:24:23.952 10:24:43 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:23.952 10:24:43 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:23.952 10:24:43 -- host/multipath.sh@69 -- # sed -n 1p 00:24:23.952 10:24:43 -- host/multipath.sh@69 -- # port=4421 00:24:23.952 10:24:43 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.952 10:24:43 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:23.952 10:24:43 -- host/multipath.sh@72 -- # kill 98985 00:24:23.952 10:24:43 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:23.952 10:24:43 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:23.952 [2024-11-19 10:24:43.396624] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396697] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396705] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396795] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396803] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396841] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396857] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396864] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396873] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396882] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396890] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396906] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396914] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396922] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396931] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396940] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396948] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396956] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.952 [2024-11-19 10:24:43.396979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.396987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.396995] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397022] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397029] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397038] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397046] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397058] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397114] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397122] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397130] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397138] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397146] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397154] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397170] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397178] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the 
state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397186] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397202] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 [2024-11-19 10:24:43.397210] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d20 is same with the state(5) to be set 00:24:23.953 10:24:43 -- host/multipath.sh@101 -- # sleep 1 00:24:24.887 10:24:44 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:24:24.887 10:24:44 -- host/multipath.sh@65 -- # dtrace_pid=99121 00:24:24.887 10:24:44 -- host/multipath.sh@66 -- # sleep 6 00:24:24.887 10:24:44 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:31.449 10:24:50 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:31.449 10:24:50 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:31.449 10:24:50 -- host/multipath.sh@67 -- # active_port=4420 00:24:31.449 10:24:50 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.449 Attaching 4 probes... 00:24:31.449 @path[10.0.0.2, 4420]: 17570 00:24:31.449 @path[10.0.0.2, 4420]: 17892 00:24:31.449 @path[10.0.0.2, 4420]: 17961 00:24:31.449 @path[10.0.0.2, 4420]: 17721 00:24:31.449 @path[10.0.0.2, 4420]: 18144 00:24:31.449 10:24:50 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:31.449 10:24:50 -- host/multipath.sh@69 -- # sed -n 1p 00:24:31.449 10:24:50 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:31.449 10:24:50 -- host/multipath.sh@69 -- # port=4420 00:24:31.449 10:24:50 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:31.449 10:24:50 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:31.449 10:24:50 -- host/multipath.sh@72 -- # kill 99121 00:24:31.449 10:24:50 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:31.449 10:24:50 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:31.708 [2024-11-19 10:24:51.076929] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:31.708 10:24:51 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.966 10:24:51 -- host/multipath.sh@111 -- # sleep 6 00:24:38.545 10:24:57 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:24:38.545 10:24:57 -- host/multipath.sh@65 -- # dtrace_pid=99320 00:24:38.545 10:24:57 -- host/multipath.sh@66 -- # sleep 6 00:24:38.545 10:24:57 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98275 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:45.113 10:25:03 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:45.113 10:25:03 -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:45.113 10:25:03 -- host/multipath.sh@67 -- # active_port=4421 00:24:45.113 10:25:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:45.113 Attaching 4 probes... 00:24:45.113 @path[10.0.0.2, 4421]: 17760 00:24:45.113 @path[10.0.0.2, 4421]: 18060 00:24:45.113 @path[10.0.0.2, 4421]: 18058 00:24:45.113 @path[10.0.0.2, 4421]: 17515 00:24:45.113 @path[10.0.0.2, 4421]: 17112 00:24:45.113 10:25:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:45.113 10:25:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:45.113 10:25:03 -- host/multipath.sh@69 -- # sed -n 1p 00:24:45.113 10:25:03 -- host/multipath.sh@69 -- # port=4421 00:24:45.113 10:25:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:45.113 10:25:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:45.113 10:25:03 -- host/multipath.sh@72 -- # kill 99320 00:24:45.113 10:25:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:45.113 10:25:03 -- host/multipath.sh@114 -- # killprocess 98373 00:24:45.113 10:25:03 -- common/autotest_common.sh@936 -- # '[' -z 98373 ']' 00:24:45.113 10:25:03 -- common/autotest_common.sh@940 -- # kill -0 98373 00:24:45.113 10:25:03 -- common/autotest_common.sh@941 -- # uname 00:24:45.113 10:25:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.113 10:25:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98373 00:24:45.113 killing process with pid 98373 00:24:45.113 10:25:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:45.113 10:25:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:45.113 10:25:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98373' 00:24:45.113 10:25:03 -- common/autotest_common.sh@955 -- # kill 98373 00:24:45.113 10:25:03 -- common/autotest_common.sh@960 -- # wait 98373 00:24:45.113 Connection closed with partial response: 00:24:45.113 00:24:45.113 00:24:45.113 10:25:03 -- host/multipath.sh@116 -- # wait 98373 00:24:45.113 10:25:03 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:45.113 [2024-11-19 10:24:06.055383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:45.114 [2024-11-19 10:24:06.055497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98373 ] 00:24:45.114 [2024-11-19 10:24:06.195566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.114 [2024-11-19 10:24:06.234322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.114 Running I/O for 90 seconds... 
00:24:45.114 [2024-11-19 10:24:15.936881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.936963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.937096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.937210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.937968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.937983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.938021] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.938097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.938230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.938268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.938384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.938407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:45.114 [2024-11-19 10:24:15.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.941941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.941983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.942031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.942070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.942110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.942149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.114 [2024-11-19 10:24:15.942203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.114 [2024-11-19 10:24:15.942241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:45.114 [2024-11-19 10:24:15.942264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.942881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.942958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.942981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.942996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 
dnr:0 00:24:45.115 [2024-11-19 10:24:15.943486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.943947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.943970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.943986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.944019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.944035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.944067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.944087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.944110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.944126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.946033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.115 [2024-11-19 10:24:15.946072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:45.115 [2024-11-19 10:24:15.946104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.115 [2024-11-19 10:24:15.946121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.946160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.946276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.946429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.946504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.946520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.947160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.947205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:45.116 [2024-11-19 10:24:15.947244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.947283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.947321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.947473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.947976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.947999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.116 [2024-11-19 10:24:15.948215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:15.948239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.116 [2024-11-19 10:24:15.948254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:45.116 [2024-11-19 10:24:22.515983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.516677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.516981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.516997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.517936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.517962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.117 [2024-11-19 10:24:22.517978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.518004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.518020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.518046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.518062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.518088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.518104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.518129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.117 [2024-11-19 10:24:22.518145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:45.117 [2024-11-19 10:24:22.518171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.518614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.518738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518763] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.518779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.518804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.518838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.519476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.519530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.519619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.519794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 
p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.519961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.519990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.520006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.520034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.520050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.520089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.520106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.520134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.520151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.520178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.118 [2024-11-19 10:24:22.520194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:45.118 [2024-11-19 10:24:22.520223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.118 [2024-11-19 10:24:22.520239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.119 [2024-11-19 10:24:22.520327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.119 [2024-11-19 10:24:22.520371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.119 [2024-11-19 10:24:22.520573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.520977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.520993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.521025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.521041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.521072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.521088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.521119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.521135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.521167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.521183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:22.521214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:22.521231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.119 [2024-11-19 10:24:29.778387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:45.119 [2024-11-19 10:24:29.778611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.778900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.778923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:45.119 [2024-11-19 10:24:29.779677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.119 [2024-11-19 10:24:29.779696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.779719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.779735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.779759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.779774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.779798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.779814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.779857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.779887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.779929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.779960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.780084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.780302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.780357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.780753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:24:45.120 [2024-11-19 10:24:29.780788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.780915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.780970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.780994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.781051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.781214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.781333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.120 [2024-11-19 10:24:29.781449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.781975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.781992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.782016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.782032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:45.120 [2024-11-19 10:24:29.782057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.120 [2024-11-19 10:24:29.782083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:45.121 [2024-11-19 10:24:29.782383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.121 [2024-11-19 10:24:29.782626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.782953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.782981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.121 [2024-11-19 10:24:29.783591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.121 [2024-11-19 10:24:29.783674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.783942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.783984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:24:45.121 [2024-11-19 10:24:29.784484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.121 [2024-11-19 10:24:29.784608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:45.121 [2024-11-19 10:24:29.784640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.121 [2024-11-19 10:24:29.784659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.784704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.784737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.784783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.784803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.784847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.784867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.784897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.784912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.784942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:29.785938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.785968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.785984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:29.786014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.122 [2024-11-19 10:24:29.786040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 
10:24:43.397750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.397973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.397989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.122 [2024-11-19 10:24:43.398185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.122 [2024-11-19 10:24:43.398202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.398898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.398975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.398989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.399067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.399136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.123 [2024-11-19 10:24:43.399168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 [2024-11-19 10:24:43.399366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.123 
[2024-11-19 10:24:43.399396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.123 [2024-11-19 10:24:43.399410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.399480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.399510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.399588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399759] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.399938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.399979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.399997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400409] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.124 [2024-11-19 10:24:43.400543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.124 [2024-11-19 10:24:43.400573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.124 [2024-11-19 10:24:43.400590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.400604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.400672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.400732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.400984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.400999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 
[2024-11-19 10:24:43.401382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:45.125 [2024-11-19 10:24:43.401534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401695] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:45.125 [2024-11-19 10:24:43.401757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2454120 is same with the state(5) to be set 00:24:45.125 [2024-11-19 10:24:43.401792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:45.125 [2024-11-19 10:24:43.401804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:45.125 [2024-11-19 10:24:43.401815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127552 len:8 PRP1 0x0 PRP2 0x0 00:24:45.125 [2024-11-19 10:24:43.401841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:45.125 [2024-11-19 10:24:43.401895] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2454120 was disconnected and freed. reset controller. 00:24:45.125 [2024-11-19 10:24:43.403226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.126 [2024-11-19 10:24:43.403321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2464b60 (9): Bad file descriptor 00:24:45.126 [2024-11-19 10:24:43.403455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.126 [2024-11-19 10:24:43.403515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.126 [2024-11-19 10:24:43.403539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2464b60 with addr=10.0.0.2, port=4421 00:24:45.126 [2024-11-19 10:24:43.403556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464b60 is same with the state(5) to be set 00:24:45.126 [2024-11-19 10:24:43.403582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2464b60 (9): Bad file descriptor 00:24:45.126 [2024-11-19 10:24:43.403609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.126 [2024-11-19 10:24:43.403625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.126 [2024-11-19 10:24:43.403652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.126 [2024-11-19 10:24:43.403679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
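The entries above show one failover cycle: qpair 0x2454120 is torn down with every outstanding READ/WRITE completed as ABORTED - SQ DELETION, the host then reconnects toward 10.0.0.2 port 4421, first hitting connect() errno 111 before the retry in the next entries succeeds. A rough bash sketch of the kind of listener toggling that provokes this; the real logic inside host/multipath.sh is not visible in this log, so treat the commands below as an illustrative assumption rather than the test's actual code:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # drop one path: reconnect attempts to that port now fail with errno 111
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
  sleep 10
  # restore the path: the next reconnect poll succeeds ("Resetting controller successful")
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421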
00:24:45.126 [2024-11-19 10:24:43.403695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:45.126 [2024-11-19 10:24:53.461898] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:45.126 Received shutdown signal, test time was about 56.290918 seconds
00:24:45.126
00:24:45.126                                                                               Latency(us)
00:24:45.126 [2024-11-19T10:25:04.672Z] Device Information                                                        : runtime(s)       IOPS      MiB/s     Fail/s      TO/s     Average        min        max
00:24:45.126 [2024-11-19T10:25:04.672Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:45.126 Verification LBA range: start 0x0 length 0x4000
00:24:45.126 Nvme0n1                                                                   :      56.29   10433.35      40.76       0.00       0.00    12248.76     577.16 7046430.72
00:24:45.126 [2024-11-19T10:25:04.672Z] ===================================================================================================================
00:24:45.126 [2024-11-19T10:25:04.672Z] Total                                                                     :            10433.35      40.76       0.00       0.00    12248.76     577.16 7046430.72
00:24:45.126 10:25:03 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:45.126 10:25:04 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:45.126 10:25:04 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:45.126 10:25:04 -- host/multipath.sh@125 -- # nvmftestfini
00:24:45.126 10:25:04 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:45.126 10:25:04 -- nvmf/common.sh@116 -- # sync
00:24:45.126 10:25:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:45.126 10:25:04 -- nvmf/common.sh@119 -- # set +e
00:24:45.126 10:25:04 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:45.126 10:25:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:45.126 rmmod nvme_tcp
00:24:45.126 rmmod nvme_fabrics
00:24:45.126 rmmod nvme_keyring
00:24:45.126 10:25:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:45.126 10:25:04 -- nvmf/common.sh@123 -- # set -e
00:24:45.126 10:25:04 -- nvmf/common.sh@124 -- # return 0
00:24:45.126 10:25:04 -- nvmf/common.sh@477 -- # '[' -n 98275 ']'
00:24:45.126 10:25:04 -- nvmf/common.sh@478 -- # killprocess 98275
00:24:45.126 10:25:04 -- common/autotest_common.sh@936 -- # '[' -z 98275 ']'
00:24:45.126 10:25:04 -- common/autotest_common.sh@940 -- # kill -0 98275
00:24:45.126 10:25:04 -- common/autotest_common.sh@941 -- # uname
00:24:45.126 10:25:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:45.126 10:25:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 98275
00:24:45.126 10:25:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:24:45.126 10:25:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:24:45.126 killing process with pid 98275
00:24:45.126 10:25:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 98275'
00:24:45.126 10:25:04 -- common/autotest_common.sh@955 -- # kill 98275
00:24:45.126 10:25:04 -- common/autotest_common.sh@960 -- # wait 98275
00:24:45.126 10:25:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:45.126 10:25:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:45.126 10:25:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:45.126 10:25:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:45.126 10:25:04 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:45.126 10:25:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:45.126 10:25:04 -- common/autotest_common.sh@22
-- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.126 10:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.126 10:25:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:24:45.126 00:24:45.126 real 1m1.892s 00:24:45.126 user 2m55.601s 00:24:45.126 sys 0m13.784s 00:24:45.126 10:25:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:45.126 10:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:45.126 ************************************ 00:24:45.126 END TEST nvmf_multipath 00:24:45.126 ************************************ 00:24:45.126 10:25:04 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:45.126 10:25:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:45.126 10:25:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:45.126 10:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:45.126 ************************************ 00:24:45.126 START TEST nvmf_timeout 00:24:45.126 ************************************ 00:24:45.126 10:25:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:45.386 * Looking for test storage... 00:24:45.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:45.386 10:25:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:45.386 10:25:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:45.386 10:25:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:45.386 10:25:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:45.386 10:25:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:45.386 10:25:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:45.386 10:25:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:45.386 10:25:04 -- scripts/common.sh@335 -- # IFS=.-: 00:24:45.386 10:25:04 -- scripts/common.sh@335 -- # read -ra ver1 00:24:45.386 10:25:04 -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.386 10:25:04 -- scripts/common.sh@336 -- # read -ra ver2 00:24:45.386 10:25:04 -- scripts/common.sh@337 -- # local 'op=<' 00:24:45.386 10:25:04 -- scripts/common.sh@339 -- # ver1_l=2 00:24:45.386 10:25:04 -- scripts/common.sh@340 -- # ver2_l=1 00:24:45.386 10:25:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:45.386 10:25:04 -- scripts/common.sh@343 -- # case "$op" in 00:24:45.386 10:25:04 -- scripts/common.sh@344 -- # : 1 00:24:45.386 10:25:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:45.386 10:25:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.386 10:25:04 -- scripts/common.sh@364 -- # decimal 1 00:24:45.386 10:25:04 -- scripts/common.sh@352 -- # local d=1 00:24:45.386 10:25:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.386 10:25:04 -- scripts/common.sh@354 -- # echo 1 00:24:45.386 10:25:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:45.386 10:25:04 -- scripts/common.sh@365 -- # decimal 2 00:24:45.386 10:25:04 -- scripts/common.sh@352 -- # local d=2 00:24:45.386 10:25:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.386 10:25:04 -- scripts/common.sh@354 -- # echo 2 00:24:45.386 10:25:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:45.386 10:25:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:45.386 10:25:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:45.386 10:25:04 -- scripts/common.sh@367 -- # return 0 00:24:45.386 10:25:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.386 10:25:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.386 --rc genhtml_branch_coverage=1 00:24:45.386 --rc genhtml_function_coverage=1 00:24:45.386 --rc genhtml_legend=1 00:24:45.386 --rc geninfo_all_blocks=1 00:24:45.386 --rc geninfo_unexecuted_blocks=1 00:24:45.386 00:24:45.386 ' 00:24:45.386 10:25:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.386 --rc genhtml_branch_coverage=1 00:24:45.386 --rc genhtml_function_coverage=1 00:24:45.386 --rc genhtml_legend=1 00:24:45.386 --rc geninfo_all_blocks=1 00:24:45.386 --rc geninfo_unexecuted_blocks=1 00:24:45.386 00:24:45.386 ' 00:24:45.386 10:25:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.386 --rc genhtml_branch_coverage=1 00:24:45.386 --rc genhtml_function_coverage=1 00:24:45.386 --rc genhtml_legend=1 00:24:45.386 --rc geninfo_all_blocks=1 00:24:45.386 --rc geninfo_unexecuted_blocks=1 00:24:45.386 00:24:45.386 ' 00:24:45.386 10:25:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.386 --rc genhtml_branch_coverage=1 00:24:45.386 --rc genhtml_function_coverage=1 00:24:45.386 --rc genhtml_legend=1 00:24:45.386 --rc geninfo_all_blocks=1 00:24:45.386 --rc geninfo_unexecuted_blocks=1 00:24:45.386 00:24:45.386 ' 00:24:45.386 10:25:04 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:45.386 10:25:04 -- nvmf/common.sh@7 -- # uname -s 00:24:45.386 10:25:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.386 10:25:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.386 10:25:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.386 10:25:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.386 10:25:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.386 10:25:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.386 10:25:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.386 10:25:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.386 10:25:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.386 10:25:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.386 10:25:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:24:45.386 
10:25:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:24:45.386 10:25:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.386 10:25:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.386 10:25:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:45.386 10:25:04 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:45.386 10:25:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.386 10:25:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.386 10:25:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.386 10:25:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.386 10:25:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.387 10:25:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.387 10:25:04 -- paths/export.sh@5 -- # export PATH 00:24:45.387 10:25:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.387 10:25:04 -- nvmf/common.sh@46 -- # : 0 00:24:45.387 10:25:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:45.387 10:25:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:45.387 10:25:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:45.387 10:25:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.387 10:25:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.387 10:25:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
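nvmf/common.sh has now set the initiator-side knobs (NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVME_HOSTNQN from nvme gen-hostnqn, the NVME_HOST array, NVME_CONNECT='nvme connect'). A hedged sketch of how those pieces are typically combined when a test drives the kernel initiator; the exact call made later by timeout.sh is not shown in this excerpt, and the target address and NQN below are taken from elsewhere in this log, not from that call:

  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}        # the bare UUID, as in the trace above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"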
00:24:45.387 10:25:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:45.387 10:25:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:45.387 10:25:04 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:45.387 10:25:04 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:45.387 10:25:04 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.387 10:25:04 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:45.387 10:25:04 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.387 10:25:04 -- host/timeout.sh@19 -- # nvmftestinit 00:24:45.387 10:25:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:45.387 10:25:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.387 10:25:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:45.387 10:25:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:45.387 10:25:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:45.387 10:25:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.387 10:25:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.387 10:25:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.387 10:25:04 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:24:45.387 10:25:04 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:24:45.387 10:25:04 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:24:45.387 10:25:04 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:24:45.387 10:25:04 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:24:45.387 10:25:04 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:24:45.387 10:25:04 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.387 10:25:04 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.387 10:25:04 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:45.387 10:25:04 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:24:45.387 10:25:04 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:45.387 10:25:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:45.387 10:25:04 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:45.387 10:25:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.387 10:25:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:45.387 10:25:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:45.387 10:25:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:45.387 10:25:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:45.387 10:25:04 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:24:45.387 10:25:04 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:24:45.387 Cannot find device "nvmf_tgt_br" 00:24:45.387 10:25:04 -- nvmf/common.sh@154 -- # true 00:24:45.387 10:25:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:24:45.387 Cannot find device "nvmf_tgt_br2" 00:24:45.387 10:25:04 -- nvmf/common.sh@155 -- # true 00:24:45.387 10:25:04 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:24:45.387 10:25:04 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:24:45.387 Cannot find device "nvmf_tgt_br" 00:24:45.387 10:25:04 -- nvmf/common.sh@157 -- # true 00:24:45.387 10:25:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:24:45.387 Cannot find device "nvmf_tgt_br2" 00:24:45.387 10:25:04 -- nvmf/common.sh@158 -- # true 00:24:45.387 10:25:04 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:24:45.387 10:25:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:24:45.387 10:25:04 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:45.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.387 10:25:04 -- nvmf/common.sh@161 -- # true 00:24:45.387 10:25:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:45.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:45.387 10:25:04 -- nvmf/common.sh@162 -- # true 00:24:45.387 10:25:04 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:24:45.387 10:25:04 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:45.645 10:25:04 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:45.645 10:25:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:45.645 10:25:04 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:45.645 10:25:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:45.645 10:25:04 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:45.645 10:25:04 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:45.645 10:25:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:45.645 10:25:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:24:45.645 10:25:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:24:45.645 10:25:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:24:45.645 10:25:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:24:45.645 10:25:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:45.645 10:25:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:45.645 10:25:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:45.645 10:25:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:24:45.645 10:25:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:24:45.645 10:25:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:24:45.645 10:25:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:45.645 10:25:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:45.645 10:25:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:45.645 10:25:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:45.645 10:25:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:24:45.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:24:45.646 00:24:45.646 --- 10.0.0.2 ping statistics --- 00:24:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.646 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:24:45.646 10:25:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:24:45.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:45.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:24:45.646 00:24:45.646 --- 10.0.0.3 ping statistics --- 00:24:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.646 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:45.646 10:25:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:45.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:45.646 00:24:45.646 --- 10.0.0.1 ping statistics --- 00:24:45.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.646 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:45.646 10:25:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.646 10:25:05 -- nvmf/common.sh@421 -- # return 0 00:24:45.646 10:25:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:45.646 10:25:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.646 10:25:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:45.646 10:25:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:45.646 10:25:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.646 10:25:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:45.646 10:25:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:45.646 10:25:05 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:24:45.646 10:25:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:45.646 10:25:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.646 10:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:45.646 10:25:05 -- nvmf/common.sh@469 -- # nvmfpid=99641 00:24:45.646 10:25:05 -- nvmf/common.sh@470 -- # waitforlisten 99641 00:24:45.646 10:25:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:45.646 10:25:05 -- common/autotest_common.sh@829 -- # '[' -z 99641 ']' 00:24:45.646 10:25:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.646 10:25:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.646 10:25:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.646 10:25:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.646 10:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:45.904 [2024-11-19 10:25:05.208596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:45.904 [2024-11-19 10:25:05.208696] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.904 [2024-11-19 10:25:05.342477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.904 [2024-11-19 10:25:05.375729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:45.904 [2024-11-19 10:25:05.375887] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.904 [2024-11-19 10:25:05.375902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
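The nvmf_veth_init sequence traced above builds a small three-port topology: three veth pairs, with the target ends (nvmf_tgt_if, nvmf_tgt_if2) moved into the nvmf_tgt_ns_spdk namespace and addressed 10.0.0.2/24 and 10.0.0.3/24, the initiator end (nvmf_init_if) left in the root namespace at 10.0.0.1/24, and the host-side peers enslaved to the nvmf_br bridge. A condensed sketch of the same commands, assuming a clean host (the "Cannot find device" / "Cannot open network namespace" lines above are just the teardown of names that do not exist yet); run as root:

# condensed from the nvmf/common.sh trace above
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target port 2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                  # root ns -> target port 1
ping -c 1 10.0.0.3                                  # root ns -> target port 2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
modprobe nvme-tcp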
00:24:45.904 [2024-11-19 10:25:05.375911] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.904 [2024-11-19 10:25:05.376026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.904 [2024-11-19 10:25:05.376036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.839 10:25:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.839 10:25:06 -- common/autotest_common.sh@862 -- # return 0 00:24:46.839 10:25:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:46.839 10:25:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.839 10:25:06 -- common/autotest_common.sh@10 -- # set +x 00:24:46.839 10:25:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.839 10:25:06 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.839 10:25:06 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.098 [2024-11-19 10:25:06.548528] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.098 10:25:06 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:47.357 Malloc0 00:24:47.615 10:25:06 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.873 10:25:07 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.131 10:25:07 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.389 [2024-11-19 10:25:07.689794] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.389 10:25:07 -- host/timeout.sh@32 -- # bdevperf_pid=99738 00:24:48.389 10:25:07 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:24:48.389 10:25:07 -- host/timeout.sh@34 -- # waitforlisten 99738 /var/tmp/bdevperf.sock 00:24:48.389 10:25:07 -- common/autotest_common.sh@829 -- # '[' -z 99738 ']' 00:24:48.389 10:25:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.389 10:25:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.389 10:25:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.389 10:25:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.389 10:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:48.389 [2024-11-19 10:25:07.760872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
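The host/timeout.sh steps just traced provision the target over its JSON-RPC socket and then launch a standalone bdevperf with its own RPC socket (/var/tmp/bdevperf.sock). A condensed sketch of that sequence, reconstructed from the commands above (paths shortened to rpc.py and bdevperf for readability; the target-side calls run through the nvmf_tgt_ns_spdk namespace exactly as in the trace):

# target side: TCP transport, 64 MiB / 512 B malloc bdev, subsystem + listener
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf on core 0x4, queue depth 128, 4 KiB verify for 10 s,
# waiting for start (-z) on its own RPC socket
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &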
00:24:48.389 [2024-11-19 10:25:07.760963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99738 ] 00:24:48.389 [2024-11-19 10:25:07.902768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.647 [2024-11-19 10:25:07.941110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.583 10:25:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.583 10:25:08 -- common/autotest_common.sh@862 -- # return 0 00:24:49.583 10:25:08 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:49.583 10:25:09 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:50.149 NVMe0n1 00:24:50.149 10:25:09 -- host/timeout.sh@51 -- # rpc_pid=99786 00:24:50.149 10:25:09 -- host/timeout.sh@53 -- # sleep 1 00:24:50.149 10:25:09 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.149 Running I/O for 10 seconds... 00:24:51.083 10:25:10 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.345 [2024-11-19 10:25:10.746449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746515] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746527] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746536] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746544] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746586] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 
10:25:10.746611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746635] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746659] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746667] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.345 [2024-11-19 10:25:10.746675] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746684] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746709] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746717] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746724] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746733] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746742] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746750] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746758] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746767] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746775] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746783] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to 
be set 00:24:51.346 [2024-11-19 10:25:10.746802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746811] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746833] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746851] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746869] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746877] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746909] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746926] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746959] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746967] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746983] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.746993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747009] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747031] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747048] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747056] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747065] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747073] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747081] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747089] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x873a60 is same with the state(5) to be set 00:24:51.346 [2024-11-19 10:25:10.747303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:123640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:123024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:123040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.346 [2024-11-19 10:25:10.747610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.346 [2024-11-19 10:25:10.747619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:123848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:51.347 [2024-11-19 10:25:10.747906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.747987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.747998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 
10:25:10.748128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.347 [2024-11-19 10:25:10.748243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.347 [2024-11-19 10:25:10.748396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.347 [2024-11-19 10:25:10.748409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748557] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:124008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:124024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.748940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:51.348 [2024-11-19 10:25:10.748981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.748993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.348 [2024-11-19 10:25:10.749197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.348 [2024-11-19 10:25:10.749209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:51.348 [2024-11-19 10:25:10.749218 - 10:25:10.750127] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each remaining READ/WRITE command on sqid:1 (lba 123624 through 124304, len:8) printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:51.350 [2024-11-19 10:25:10.750138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21acb80 is same with the state(5) to be set
00:24:51.350 [2024-11-19 10:25:10.750152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:51.350 [2024-11-19 10:25:10.750160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:51.350 [2024-11-19 10:25:10.750169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123856 len:8 PRP1 0x0 PRP2 0x0
00:24:51.350 [2024-11-19 10:25:10.750178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:51.350 [2024-11-19 10:25:10.750234] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21acb80 was disconnected and freed. reset controller.
00:24:51.350 [2024-11-19 10:25:10.750491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.350 [2024-11-19 10:25:10.750589] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b250 (9): Bad file descriptor
00:24:51.350 [2024-11-19 10:25:10.750732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.350 [2024-11-19 10:25:10.750781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:51.350 [2024-11-19 10:25:10.750798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b250 with addr=10.0.0.2, port=4420
00:24:51.350 [2024-11-19 10:25:10.750809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b250 is same with the state(5) to be set
00:24:51.350 [2024-11-19 10:25:10.750844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b250 (9): Bad file descriptor
00:24:51.350 [2024-11-19 10:25:10.750863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:51.350 [2024-11-19 10:25:10.750873] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:51.350 [2024-11-19 10:25:10.750884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:51.350 [2024-11-19 10:25:10.750905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
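The repeated connect() failures above all carry errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target listener is down, so every reconnect attempt is refused immediately. A one-line check of what that errno means (illustrative only, not part of host/timeout.sh):
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # prints: ECONNREFUSED Connection refused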
00:24:51.350 [2024-11-19 10:25:10.750916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:51.350 10:25:10 -- host/timeout.sh@56 -- # sleep 2
00:24:53.259 [2024-11-19 10:25:12.751081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.259 [2024-11-19 10:25:12.751185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:53.259 [2024-11-19 10:25:12.751206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b250 with addr=10.0.0.2, port=4420
00:24:53.259 [2024-11-19 10:25:12.751220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b250 is same with the state(5) to be set
00:24:53.260 [2024-11-19 10:25:12.751249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b250 (9): Bad file descriptor
00:24:53.260 [2024-11-19 10:25:12.751269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:53.260 [2024-11-19 10:25:12.751280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:53.260 [2024-11-19 10:25:12.751291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:53.260 [2024-11-19 10:25:12.751319] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:53.260 [2024-11-19 10:25:12.751331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:53.260 10:25:12 -- host/timeout.sh@57 -- # get_controller
00:24:53.260 10:25:12 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:53.260 10:25:12 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:53.828 10:25:13 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:53.828 10:25:13 -- host/timeout.sh@58 -- # get_bdev
00:24:53.828 10:25:13 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:53.828 10:25:13 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:54.086 10:25:13 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:54.086 10:25:13 -- host/timeout.sh@61 -- # sleep 5
00:24:55.472 [2024-11-19 10:25:14.751502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.472 [2024-11-19 10:25:14.751789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:55.472 [2024-11-19 10:25:14.751833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x217b250 with addr=10.0.0.2, port=4420
00:24:55.472 [2024-11-19 10:25:14.751851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b250 is same with the state(5) to be set
00:24:55.472 [2024-11-19 10:25:14.751884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x217b250 (9): Bad file descriptor
00:24:55.472 [2024-11-19 10:25:14.751905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:55.472 [2024-11-19 10:25:14.751915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:55.472 [2024-11-19 10:25:14.751927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.472 [2024-11-19 10:25:14.751955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:55.472 [2024-11-19 10:25:14.751966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:57.374 [2024-11-19 10:25:16.752006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:57.374 [2024-11-19 10:25:16.752094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:57.374 [2024-11-19 10:25:16.752108] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:57.374 [2024-11-19 10:25:16.752119] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:57.374 [2024-11-19 10:25:16.752149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:58.310
00:24:58.310                                                      Latency(us)
00:24:58.310 [2024-11-19T10:25:17.856Z] Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:24:58.310 [2024-11-19T10:25:17.856Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:58.310 	 Verification LBA range: start 0x0 length 0x4000
00:24:58.310 	 NVMe0n1                     :       8.19  1882.57     7.35    15.64    0.00   67332.11    2949.12 7015926.69
00:24:58.310 [2024-11-19T10:25:17.856Z] ===================================================================================================================
00:24:58.310 [2024-11-19T10:25:17.856Z] Total                       :            1882.57     7.35    15.64    0.00   67332.11    2949.12 7015926.69
00:24:58.310 0
00:24:59.246 10:25:18 -- host/timeout.sh@62 -- # get_controller
00:24:59.246 10:25:18 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:59.246 10:25:18 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:59.246 10:25:18 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:59.246 10:25:18 -- host/timeout.sh@63 -- # get_bdev
00:24:59.246 10:25:18 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:59.246 10:25:18 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:59.505 10:25:18 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:59.505 10:25:18 -- host/timeout.sh@65 -- # wait 99786
00:24:59.505 10:25:18 -- host/timeout.sh@67 -- # killprocess 99738
00:24:59.505 10:25:18 -- common/autotest_common.sh@936 -- # '[' -z 99738 ']'
00:24:59.505 10:25:18 -- common/autotest_common.sh@940 -- # kill -0 99738
00:24:59.505 10:25:18 -- common/autotest_common.sh@941 -- # uname
00:24:59.505 10:25:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:24:59.505 10:25:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99738
killing process with pid 99738
Received shutdown signal, test time was about 9.461904 seconds
00:24:59.505
00:24:59.505                                                      Latency(us)
00:24:59.505 [2024-11-19T10:25:19.051Z] Device Information          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:24:59.505 [2024-11-19T10:25:19.051Z] ===================================================================================================================
00:24:59.505 [2024-11-19T10:25:19.051Z] Total                       :               0.00     0.00     0.00    0.00       0.00       0.00       0.00
00:24:59.505 10:25:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:24:59.505 10:25:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:24:59.505 10:25:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99738'
00:24:59.505 10:25:19 -- common/autotest_common.sh@955 -- # kill 99738
00:24:59.505 10:25:19 -- common/autotest_common.sh@960 -- # wait 99738
00:24:59.764 10:25:19 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:00.023 [2024-11-19 10:25:19.419099] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.023 10:25:19 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:25:00.023 10:25:19 -- host/timeout.sh@74 -- # bdevperf_pid=99939
00:25:00.023 10:25:19 -- host/timeout.sh@76 -- # waitforlisten 99939 /var/tmp/bdevperf.sock
00:25:00.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:00.023 10:25:19 -- common/autotest_common.sh@829 -- # '[' -z 99939 ']'
00:25:00.023 10:25:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:00.023 10:25:19 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:00.023 10:25:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:00.023 10:25:19 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:00.023 10:25:19 -- common/autotest_common.sh@10 -- # set +x
00:25:00.023 [2024-11-19 10:25:19.493259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:25:00.023 [2024-11-19 10:25:19.493731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99939 ]
00:25:00.282 [2024-11-19 10:25:19.631735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:00.282 [2024-11-19 10:25:19.669699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:25:00.540 10:25:19 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:00.540 10:25:19 -- common/autotest_common.sh@862 -- # return 0
00:25:00.540 10:25:19 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:25:00.798 10:25:20 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:25:01.059 NVMe0n1
00:25:01.059 10:25:20 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:01.059 10:25:20 -- host/timeout.sh@84 -- # rpc_pid=99973
00:25:01.059 10:25:20 -- host/timeout.sh@86 -- # sleep 1
00:25:01.318 Running I/O for 10 seconds...
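The block above is the second bdevperf run being stood up: a fresh listener on the target, a new bdevperf instance, and a controller attached with explicit reconnect and timeout parameters. A condensed sketch of those steps, assembled only from the commands visible in this transcript; the shell glue (the SPDK variable, backgrounding, and the sleep standing in for waitforlisten) is illustrative and not taken from host/timeout.sh itself:
    SPDK=/home/vagrant/spdk_repo/spdk
    # re-add the NVMe/TCP listener on the target side
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # start bdevperf in wait-for-RPC mode (-z) on its own RPC socket
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    sleep 1   # stand-in for waitforlisten: give bdevperf time to create /var/tmp/bdevperf.sock
    # infinite RPC retries, then attach the controller with the timeout knobs under test
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # confirm the controller and bdev exist under the expected names, as the script does with jq
    [[ $($SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name') == NVMe0 ]]
    [[ $($SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name') == NVMe0n1 ]]
    # kick off the I/O workload
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &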
00:25:02.253 10:25:21 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:02.515 [2024-11-19 10:25:21.823409 - 10:25:21.824040] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x877d90 is same with the state(5) to be set (same message repeated throughout this interval)
00:25:02.516 [2024-11-19 10:25:21.824269 - 10:25:21.824443] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed with ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.516 [2024-11-19 10:25:21.824462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set
00:25:02.516 [2024-11-19 10:25:21.824598 - 10:25:21.828684] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each remaining READ/WRITE command on sqid:1 (lba 125344 through 126424, len:8) printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.518 [2024-11-19 10:25:21.828702] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.828741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.518 [2024-11-19 10:25:21.828782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.518 [2024-11-19 10:25:21.828843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.828888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.828927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.828965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.828987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.829006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.518 [2024-11-19 10:25:21.829029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.518 [2024-11-19 10:25:21.829048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.519 [2024-11-19 10:25:21.829214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.519 [2024-11-19 10:25:21.829263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.519 [2024-11-19 10:25:21.829304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:126480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.519 [2024-11-19 10:25:21.829436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829530] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:02.519 [2024-11-19 10:25:21.829695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.519 [2024-11-19 10:25:21.829953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.519 [2024-11-19 10:25:21.829972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.519 [2024-11-19 10:25:21.829995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.519 [2024-11-19 10:25:21.830015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.519 [2024-11-19 10:25:21.830038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:02.519 [2024-11-19 10:25:21.830058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.519 [2024-11-19 10:25:21.830104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:02.519 [2024-11-19 10:25:21.830124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:02.519 [2024-11-19 10:25:21.830140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126016 len:8 PRP1 0x0 PRP2 0x0
00:25:02.519 [2024-11-19 10:25:21.830160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:02.519 [2024-11-19 10:25:21.830232] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22109c0 was disconnected and freed. reset controller.
00:25:02.519 [2024-11-19 10:25:21.830578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:02.519 [2024-11-19 10:25:21.830634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor
00:25:02.519 [2024-11-19 10:25:21.830795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.519 [2024-11-19 10:25:21.830913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.519 [2024-11-19 10:25:21.830965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420
00:25:02.519 [2024-11-19 10:25:21.830989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set
00:25:02.519 [2024-11-19 10:25:21.831037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor
00:25:02.519 [2024-11-19 10:25:21.831069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:02.519 [2024-11-19 10:25:21.831088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:02.519 [2024-11-19 10:25:21.831106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:02.519 [2024-11-19 10:25:21.831143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.519 [2024-11-19 10:25:21.831175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:02.519 10:25:21 -- host/timeout.sh@90 -- # sleep 1
00:25:03.456 [2024-11-19 10:25:22.831343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.456 [2024-11-19 10:25:22.831475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.456 [2024-11-19 10:25:22.831505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420
00:25:03.456 [2024-11-19 10:25:22.831528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set
00:25:03.456 [2024-11-19 10:25:22.831570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor
00:25:03.456 [2024-11-19 10:25:22.831602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:03.456 [2024-11-19 10:25:22.831619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:03.456 [2024-11-19 10:25:22.831638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:03.456 [2024-11-19 10:25:22.831680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:03.456 [2024-11-19 10:25:22.831702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:03.456 10:25:22 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:03.715 [2024-11-19 10:25:23.173367] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:03.715 10:25:23 -- host/timeout.sh@92 -- # wait 99973
00:25:04.654 [2024-11-19 10:25:23.846476] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:11.216
00:25:11.216                                                                  Latency(us)
00:25:11.216 [2024-11-19T10:25:30.762Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min         max
00:25:11.216 [2024-11-19T10:25:30.762Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:11.216   Verification LBA range: start 0x0 length 0x4000
00:25:11.216   NVMe0n1            :      10.01    9157.72      35.77       0.00      0.00   13950.02    1541.59  3019898.88
00:25:11.216 [2024-11-19T10:25:30.762Z] ===================================================================================================================
00:25:11.216 [2024-11-19T10:25:30.762Z] Total              :              9157.72      35.77       0.00      0.00   13950.02    1541.59  3019898.88
00:25:11.216 0
00:25:11.216 10:25:30 -- host/timeout.sh@97 -- # rpc_pid=100094
00:25:11.216 10:25:30 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:11.216 10:25:30 -- host/timeout.sh@98 -- # sleep 1
00:25:11.474 Running I/O for 10 seconds...
00:25:12.409 10:25:31 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.669 [2024-11-19 10:25:32.023773] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023866] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023887] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023903] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023950] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023965] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.023996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.024011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.024026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.024042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.669 [2024-11-19 10:25:32.024057] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024072] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024087] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024102] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024132] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024147] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024162] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024192] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024223] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024237] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024253] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024288] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024351] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024383] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024413] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024428] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024444] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024507] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024538] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024553] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024673] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024722] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024770] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024839] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 00:25:12.670 [2024-11-19 10:25:32.024859] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d19c0 is same with the state(5) to be set 
00:25:12.670 [2024-11-19 10:25:32.025123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 
10:25:32.025383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.670 [2024-11-19 10:25:32.025474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.670 [2024-11-19 10:25:32.025486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025792] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.025858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.025878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.025898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.025981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.025992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120440 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.026245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.026265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.671 [2024-11-19 10:25:32.026284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.671 [2024-11-19 10:25:32.026315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.671 [2024-11-19 10:25:32.026324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:12.672 [2024-11-19 10:25:32.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 
10:25:32.026647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.026963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.672 [2024-11-19 10:25:32.026984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.026995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.672 [2024-11-19 10:25:32.027147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.672 [2024-11-19 10:25:32.027158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.673 [2024-11-19 10:25:32.027654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 
[2024-11-19 10:25:32.027705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:12.673 [2024-11-19 10:25:32.027816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d570 is same with the state(5) to be set 00:25:12.673 [2024-11-19 10:25:32.027850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:12.673 [2024-11-19 10:25:32.027858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:12.673 [2024-11-19 10:25:32.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120344 len:8 PRP1 0x0 PRP2 0x0 00:25:12.673 [2024-11-19 10:25:32.027875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.673 [2024-11-19 10:25:32.027924] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x220d570 was disconnected and freed. reset controller. 
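The dump above is the host-side driver draining qpair 0x220d570 after the target dropped it: every READ and WRITE still queued is completed manually with status (00/08), which decodes to status code type 0x0 (generic) and status code 0x08 (command aborted due to SQ deletion), and the qpair is then freed so the controller can be reset. A hypothetical way to size an abort storm like this from a saved copy of the console output (./console.log is an assumed path, not a file this job produces):

    # Count the ABORTED - SQ DELETION completions, then split the aborted commands
    # into reads vs. writes; both greps only parse log text of the form shown above.
    grep -c 'ABORTED - SQ DELETION' ./console.log
    grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' ./console.log | sort | uniq -c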
00:25:12.673 [2024-11-19 10:25:32.028175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.673 [2024-11-19 10:25:32.028269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor 00:25:12.673 [2024-11-19 10:25:32.028375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.673 [2024-11-19 10:25:32.028424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:12.673 [2024-11-19 10:25:32.028442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420 00:25:12.673 [2024-11-19 10:25:32.028453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set 00:25:12.673 [2024-11-19 10:25:32.028471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor 00:25:12.673 [2024-11-19 10:25:32.028487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:12.674 [2024-11-19 10:25:32.028496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:12.674 [2024-11-19 10:25:32.028506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:12.674 [2024-11-19 10:25:32.028526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:12.674 [2024-11-19 10:25:32.028537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:12.674 10:25:32 -- host/timeout.sh@101 -- # sleep 3 00:25:13.609 [2024-11-19 10:25:33.028689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.609 [2024-11-19 10:25:33.029073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.609 [2024-11-19 10:25:33.029108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420 00:25:13.609 [2024-11-19 10:25:33.029127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set 00:25:13.609 [2024-11-19 10:25:33.029168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor 00:25:13.609 [2024-11-19 10:25:33.029192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:13.609 [2024-11-19 10:25:33.029204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:13.609 [2024-11-19 10:25:33.029218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.609 [2024-11-19 10:25:33.029250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:13.609 [2024-11-19 10:25:33.029264] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.545 [2024-11-19 10:25:34.029421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.545 [2024-11-19 10:25:34.029524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.545 [2024-11-19 10:25:34.029545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420 00:25:14.546 [2024-11-19 10:25:34.029560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set 00:25:14.546 [2024-11-19 10:25:34.029588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor 00:25:14.546 [2024-11-19 10:25:34.029608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:14.546 [2024-11-19 10:25:34.029618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:14.546 [2024-11-19 10:25:34.029629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:14.546 [2024-11-19 10:25:34.029657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.546 [2024-11-19 10:25:34.029669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.921 [2024-11-19 10:25:35.031577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.921 [2024-11-19 10:25:35.031685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:15.921 [2024-11-19 10:25:35.031705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21df090 with addr=10.0.0.2, port=4420 00:25:15.921 [2024-11-19 10:25:35.031720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21df090 is same with the state(5) to be set 00:25:15.921 [2024-11-19 10:25:35.031973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21df090 (9): Bad file descriptor 00:25:15.921 [2024-11-19 10:25:35.032170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:15.921 [2024-11-19 10:25:35.032183] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:15.921 [2024-11-19 10:25:35.032194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:15.921 [2024-11-19 10:25:35.034504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:15.921 [2024-11-19 10:25:35.034535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:15.921 10:25:35 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.921 [2024-11-19 10:25:35.331668] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.921 10:25:35 -- host/timeout.sh@103 -- # wait 100094 00:25:16.863 [2024-11-19 10:25:36.063278] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
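To recap the recovery just logged: while the target's listener was down, every reconnect attempt failed in posix_sock_create() with errno 111 (ECONNREFUSED) and each reset cycle ended in "Resetting controller failed."; once host/timeout.sh@102 re-added the listener and the target reported it was listening on 10.0.0.2 port 4420 again, the next reset succeeded. A minimal sketch of that listener bounce, reusing the rpc.py calls that appear verbatim in this log (it assumes the SPDK target from this test is still up and rpc.py can reach its default RPC socket):

    # Drop the NVMe/TCP listener, let host reconnects fail for a few seconds, then
    # restore it; bdev_nvme's reset retries are expected to succeed shortly after.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3   # connect() keeps failing with errno 111 while the port is closed
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420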
00:25:22.181 00:25:22.181 Latency(us) 00:25:22.181 [2024-11-19T10:25:41.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.181 [2024-11-19T10:25:41.727Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:22.181 Verification LBA range: start 0x0 length 0x4000 00:25:22.181 NVMe0n1 : 10.01 7656.07 29.91 5636.17 0.00 9610.75 647.91 3019898.88 00:25:22.181 [2024-11-19T10:25:41.727Z] =================================================================================================================== 00:25:22.181 [2024-11-19T10:25:41.727Z] Total : 7656.07 29.91 5636.17 0.00 9610.75 0.00 3019898.88 00:25:22.181 0 00:25:22.181 10:25:40 -- host/timeout.sh@105 -- # killprocess 99939 00:25:22.181 10:25:40 -- common/autotest_common.sh@936 -- # '[' -z 99939 ']' 00:25:22.181 10:25:40 -- common/autotest_common.sh@940 -- # kill -0 99939 00:25:22.181 10:25:40 -- common/autotest_common.sh@941 -- # uname 00:25:22.181 10:25:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.181 10:25:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99939 00:25:22.181 killing process with pid 99939 00:25:22.181 Received shutdown signal, test time was about 10.000000 seconds 00:25:22.181 00:25:22.181 Latency(us) 00:25:22.181 [2024-11-19T10:25:41.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.181 [2024-11-19T10:25:41.727Z] =================================================================================================================== 00:25:22.181 [2024-11-19T10:25:41.727Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.181 10:25:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:22.181 10:25:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:22.181 10:25:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99939' 00:25:22.181 10:25:40 -- common/autotest_common.sh@955 -- # kill 99939 00:25:22.182 10:25:40 -- common/autotest_common.sh@960 -- # wait 99939 00:25:22.182 10:25:40 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:22.182 10:25:40 -- host/timeout.sh@110 -- # bdevperf_pid=100222 00:25:22.182 10:25:40 -- host/timeout.sh@112 -- # waitforlisten 100222 /var/tmp/bdevperf.sock 00:25:22.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:22.182 10:25:40 -- common/autotest_common.sh@829 -- # '[' -z 100222 ']' 00:25:22.182 10:25:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.182 10:25:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.182 10:25:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.182 10:25:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.182 10:25:40 -- common/autotest_common.sh@10 -- # set +x 00:25:22.182 [2024-11-19 10:25:41.038329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
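At this point the first case has been torn down (killprocess 99939) and a fresh bdevperf instance, pid 100222, is started idle for the controller-loss-timeout case. Pieced together from the shell trace that follows, its setup looks roughly like the sketch below; the paths are the ones used in this CI workspace, and the waitforlisten step and the bpftrace hook are left out for brevity:

    # Start bdevperf idle (-z) on its own RPC socket, set the NVMe bdev options used
    # in the trace, attach the controller with a 5 s ctrlr-loss timeout and a 2 s
    # reconnect delay, then kick off the queued workload over the RPC socket.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    "$BDEVPERF" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
    # (the real test blocks on waitforlisten until $SOCK exists before issuing RPCs)
    "$RPC" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9
    "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests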
00:25:22.182 [2024-11-19 10:25:41.038426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100222 ] 00:25:22.182 [2024-11-19 10:25:41.169868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.182 [2024-11-19 10:25:41.204605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.182 10:25:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.182 10:25:41 -- common/autotest_common.sh@862 -- # return 0 00:25:22.182 10:25:41 -- host/timeout.sh@116 -- # dtrace_pid=100231 00:25:22.182 10:25:41 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100222 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:25:22.182 10:25:41 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:25:22.182 10:25:41 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:22.440 NVMe0n1 00:25:22.440 10:25:41 -- host/timeout.sh@124 -- # rpc_pid=100283 00:25:22.440 10:25:41 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:22.440 10:25:41 -- host/timeout.sh@125 -- # sleep 1 00:25:22.698 Running I/O for 10 seconds... 00:25:23.637 10:25:42 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.637 [2024-11-19 10:25:43.157555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157623] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157632] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157640] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157649] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157657] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157666] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157674] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157682] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157690] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157698] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157715] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157723] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157731] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157739] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157756] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157764] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157772] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157780] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157788] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157805] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157834] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157844] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157870] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157888] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157901] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157915] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157942] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157954] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.637 [2024-11-19 10:25:43.157963] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.157971] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.157979] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.157987] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.157995] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158011] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158028] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158035] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158043] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158075] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 
00:25:23.638 [2024-11-19 10:25:43.158091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158101] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158109] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158117] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158133] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158141] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158149] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158157] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158166] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158174] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158183] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d4db0 is same with the state(5) to be set 00:25:23.638 [2024-11-19 10:25:43.158418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 
[2024-11-19 10:25:43.158615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.158977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.158995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.638 [2024-11-19 10:25:43.159352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.638 [2024-11-19 10:25:43.159368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repetitive NOTICE pairs trimmed: every remaining outstanding READ on sqid:1 (various cid and lba values) was printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) while the submission queue was deleted for the controller reset ...]
00:25:23.641 [2024-11-19 10:25:43.162856] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x81fd10 is same with the state(5) to be set 00:25:23.641 [2024-11-19 10:25:43.162877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:23.641 [2024-11-19 10:25:43.162890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:23.641 [2024-11-19 10:25:43.162905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1928 len:8 PRP1 0x0 PRP2 0x0 00:25:23.641 [2024-11-19 10:25:43.162920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.641 [2024-11-19 10:25:43.162977] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x81fd10 was disconnected and freed. reset controller. 00:25:23.641 [2024-11-19 10:25:43.163368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:23.641 [2024-11-19 10:25:43.163492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee0b0 (9): Bad file descriptor 00:25:23.641 [2024-11-19 10:25:43.163665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.641 [2024-11-19 10:25:43.163756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.641 [2024-11-19 10:25:43.163793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee0b0 with addr=10.0.0.2, port=4420 00:25:23.641 [2024-11-19 10:25:43.163813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee0b0 is same with the state(5) to be set 00:25:23.641 [2024-11-19 10:25:43.163864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee0b0 (9): Bad file descriptor 00:25:23.641 [2024-11-19 10:25:43.163891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:23.641 [2024-11-19 10:25:43.163908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:23.641 [2024-11-19 10:25:43.163925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:23.641 [2024-11-19 10:25:43.163957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:23.641 [2024-11-19 10:25:43.163975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:23.903 10:25:43 -- host/timeout.sh@128 -- # wait 100283 00:25:25.805 [2024-11-19 10:25:45.164165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.805 [2024-11-19 10:25:45.164282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:25.805 [2024-11-19 10:25:45.164303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee0b0 with addr=10.0.0.2, port=4420 00:25:25.805 [2024-11-19 10:25:45.164318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee0b0 is same with the state(5) to be set 00:25:25.805 [2024-11-19 10:25:45.164348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee0b0 (9): Bad file descriptor 00:25:25.805 [2024-11-19 10:25:45.164368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:25.805 [2024-11-19 10:25:45.164379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:25.805 [2024-11-19 10:25:45.164389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:25.805 [2024-11-19 10:25:45.164418] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:25.805 [2024-11-19 10:25:45.164430] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.707 [2024-11-19 10:25:47.164613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.707 [2024-11-19 10:25:47.164722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.707 [2024-11-19 10:25:47.164744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ee0b0 with addr=10.0.0.2, port=4420 00:25:27.707 [2024-11-19 10:25:47.164759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ee0b0 is same with the state(5) to be set 00:25:27.707 [2024-11-19 10:25:47.164786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ee0b0 (9): Bad file descriptor 00:25:27.707 [2024-11-19 10:25:47.164832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.707 [2024-11-19 10:25:47.164846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:27.707 [2024-11-19 10:25:47.164858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.707 [2024-11-19 10:25:47.164886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.707 [2024-11-19 10:25:47.164905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.235 [2024-11-19 10:25:49.164974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:30.235 [2024-11-19 10:25:49.165031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.235 [2024-11-19 10:25:49.165044] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.235 [2024-11-19 10:25:49.165055] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:30.235 [2024-11-19 10:25:49.165083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.802 00:25:30.802 Latency(us) 00:25:30.802 [2024-11-19T10:25:50.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.802 [2024-11-19T10:25:50.348Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:25:30.802 NVMe0n1 : 8.12 2485.80 9.71 15.75 0.00 51086.63 2398.02 7015926.69 00:25:30.802 [2024-11-19T10:25:50.348Z] =================================================================================================================== 00:25:30.802 [2024-11-19T10:25:50.348Z] Total : 2485.80 9.71 15.75 0.00 51086.63 2398.02 7015926.69 00:25:30.802 0 00:25:30.802 10:25:50 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.802 Attaching 5 probes... 00:25:30.802 1347.958864: reset bdev controller NVMe0 00:25:30.802 1348.172657: reconnect bdev controller NVMe0 00:25:30.802 3348.622132: reconnect delay bdev controller NVMe0 00:25:30.802 3348.648969: reconnect bdev controller NVMe0 00:25:30.802 5349.072368: reconnect delay bdev controller NVMe0 00:25:30.802 5349.099528: reconnect bdev controller NVMe0 00:25:30.802 7349.552628: reconnect delay bdev controller NVMe0 00:25:30.802 7349.577617: reconnect bdev controller NVMe0 00:25:30.802 10:25:50 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:25:30.802 10:25:50 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:25:30.802 10:25:50 -- host/timeout.sh@136 -- # kill 100231 00:25:30.802 10:25:50 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:30.802 10:25:50 -- host/timeout.sh@139 -- # killprocess 100222 00:25:30.802 10:25:50 -- common/autotest_common.sh@936 -- # '[' -z 100222 ']' 00:25:30.802 10:25:50 -- common/autotest_common.sh@940 -- # kill -0 100222 00:25:30.802 10:25:50 -- common/autotest_common.sh@941 -- # uname 00:25:30.802 10:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.802 10:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100222 00:25:30.802 killing process with pid 100222 00:25:30.802 Received shutdown signal, test time was about 8.195630 seconds 00:25:30.802 00:25:30.802 Latency(us) 00:25:30.802 [2024-11-19T10:25:50.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.802 [2024-11-19T10:25:50.348Z] =================================================================================================================== 00:25:30.802 [2024-11-19T10:25:50.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.802 10:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:30.802 10:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:30.802 10:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100222' 00:25:30.802 10:25:50 -- common/autotest_common.sh@955 -- # kill 100222 00:25:30.802 10:25:50 -- common/autotest_common.sh@960 -- # wait 100222 00:25:31.061 
10:25:50 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.320 10:25:50 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:25:31.320 10:25:50 -- host/timeout.sh@145 -- # nvmftestfini 00:25:31.320 10:25:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:31.320 10:25:50 -- nvmf/common.sh@116 -- # sync 00:25:31.320 10:25:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:31.320 10:25:50 -- nvmf/common.sh@119 -- # set +e 00:25:31.320 10:25:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:31.320 10:25:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:31.320 rmmod nvme_tcp 00:25:31.320 rmmod nvme_fabrics 00:25:31.320 rmmod nvme_keyring 00:25:31.320 10:25:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:31.320 10:25:50 -- nvmf/common.sh@123 -- # set -e 00:25:31.320 10:25:50 -- nvmf/common.sh@124 -- # return 0 00:25:31.320 10:25:50 -- nvmf/common.sh@477 -- # '[' -n 99641 ']' 00:25:31.320 10:25:50 -- nvmf/common.sh@478 -- # killprocess 99641 00:25:31.320 10:25:50 -- common/autotest_common.sh@936 -- # '[' -z 99641 ']' 00:25:31.320 10:25:50 -- common/autotest_common.sh@940 -- # kill -0 99641 00:25:31.320 10:25:50 -- common/autotest_common.sh@941 -- # uname 00:25:31.320 10:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:31.320 10:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 99641 00:25:31.320 killing process with pid 99641 00:25:31.320 10:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:31.320 10:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:31.320 10:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 99641' 00:25:31.320 10:25:50 -- common/autotest_common.sh@955 -- # kill 99641 00:25:31.320 10:25:50 -- common/autotest_common.sh@960 -- # wait 99641 00:25:31.579 10:25:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:31.579 10:25:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:31.579 10:25:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:31.579 10:25:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.579 10:25:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:31.579 10:25:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.579 10:25:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:31.579 10:25:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.579 10:25:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:31.579 00:25:31.579 real 0m46.396s 00:25:31.579 user 2m17.297s 00:25:31.579 sys 0m4.822s 00:25:31.579 10:25:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.579 ************************************ 00:25:31.579 END TEST nvmf_timeout 00:25:31.579 ************************************ 00:25:31.579 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:31.579 10:25:51 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:25:31.579 10:25:51 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:25:31.579 10:25:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:31.579 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:31.579 10:25:51 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:31.579 00:25:31.579 real 17m17.233s 00:25:31.579 user 55m44.542s 00:25:31.579 sys 3m45.007s 00:25:31.579 10:25:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.579 10:25:51 -- common/autotest_common.sh@10 -- # 
set +x 00:25:31.579 ************************************ 00:25:31.579 END TEST nvmf_tcp 00:25:31.579 ************************************ 00:25:31.579 10:25:51 -- spdk/autotest.sh@283 -- # [[ 0 -eq 0 ]] 00:25:31.579 10:25:51 -- spdk/autotest.sh@284 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:31.579 10:25:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:31.579 10:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:31.579 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:31.838 ************************************ 00:25:31.839 START TEST spdkcli_nvmf_tcp 00:25:31.839 ************************************ 00:25:31.839 10:25:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:31.839 * Looking for test storage... 00:25:31.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:25:31.839 10:25:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:31.839 10:25:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:31.839 10:25:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:31.839 10:25:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:31.839 10:25:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:31.839 10:25:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:31.839 10:25:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:31.839 10:25:51 -- scripts/common.sh@335 -- # IFS=.-: 00:25:31.839 10:25:51 -- scripts/common.sh@335 -- # read -ra ver1 00:25:31.839 10:25:51 -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.839 10:25:51 -- scripts/common.sh@336 -- # read -ra ver2 00:25:31.839 10:25:51 -- scripts/common.sh@337 -- # local 'op=<' 00:25:31.839 10:25:51 -- scripts/common.sh@339 -- # ver1_l=2 00:25:31.839 10:25:51 -- scripts/common.sh@340 -- # ver2_l=1 00:25:31.839 10:25:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:31.839 10:25:51 -- scripts/common.sh@343 -- # case "$op" in 00:25:31.839 10:25:51 -- scripts/common.sh@344 -- # : 1 00:25:31.839 10:25:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:31.839 10:25:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.839 10:25:51 -- scripts/common.sh@364 -- # decimal 1 00:25:31.839 10:25:51 -- scripts/common.sh@352 -- # local d=1 00:25:31.839 10:25:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.839 10:25:51 -- scripts/common.sh@354 -- # echo 1 00:25:31.839 10:25:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:31.839 10:25:51 -- scripts/common.sh@365 -- # decimal 2 00:25:31.839 10:25:51 -- scripts/common.sh@352 -- # local d=2 00:25:31.839 10:25:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.839 10:25:51 -- scripts/common.sh@354 -- # echo 2 00:25:31.839 10:25:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:31.839 10:25:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:31.839 10:25:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:31.839 10:25:51 -- scripts/common.sh@367 -- # return 0 00:25:31.839 10:25:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.839 10:25:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.839 --rc genhtml_branch_coverage=1 00:25:31.839 --rc genhtml_function_coverage=1 00:25:31.839 --rc genhtml_legend=1 00:25:31.839 --rc geninfo_all_blocks=1 00:25:31.839 --rc geninfo_unexecuted_blocks=1 00:25:31.839 00:25:31.839 ' 00:25:31.839 10:25:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.839 --rc genhtml_branch_coverage=1 00:25:31.839 --rc genhtml_function_coverage=1 00:25:31.839 --rc genhtml_legend=1 00:25:31.839 --rc geninfo_all_blocks=1 00:25:31.839 --rc geninfo_unexecuted_blocks=1 00:25:31.839 00:25:31.839 ' 00:25:31.839 10:25:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.839 --rc genhtml_branch_coverage=1 00:25:31.839 --rc genhtml_function_coverage=1 00:25:31.839 --rc genhtml_legend=1 00:25:31.839 --rc geninfo_all_blocks=1 00:25:31.839 --rc geninfo_unexecuted_blocks=1 00:25:31.839 00:25:31.839 ' 00:25:31.839 10:25:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.839 --rc genhtml_branch_coverage=1 00:25:31.839 --rc genhtml_function_coverage=1 00:25:31.839 --rc genhtml_legend=1 00:25:31.839 --rc geninfo_all_blocks=1 00:25:31.839 --rc geninfo_unexecuted_blocks=1 00:25:31.839 00:25:31.839 ' 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:25:31.839 10:25:51 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:25:31.839 10:25:51 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:31.839 10:25:51 -- nvmf/common.sh@7 -- # uname -s 00:25:31.839 10:25:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.839 10:25:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.839 10:25:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.839 10:25:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.839 10:25:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.839 10:25:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.839 10:25:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:25:31.839 10:25:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.839 10:25:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.839 10:25:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.839 10:25:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:25:31.839 10:25:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:25:31.839 10:25:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.839 10:25:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.839 10:25:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:31.839 10:25:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:31.839 10:25:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.839 10:25:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.839 10:25:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.839 10:25:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.839 10:25:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.839 10:25:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.839 10:25:51 -- paths/export.sh@5 -- # export PATH 00:25:31.839 10:25:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.839 10:25:51 -- nvmf/common.sh@46 -- # : 0 00:25:31.839 10:25:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:31.839 10:25:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:31.839 10:25:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:31.839 10:25:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.839 10:25:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.839 10:25:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:31.839 10:25:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:31.839 10:25:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:31.839 10:25:51 -- 
spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:31.839 10:25:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:31.839 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:31.839 10:25:51 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:31.839 10:25:51 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100509 00:25:31.839 10:25:51 -- spdkcli/common.sh@34 -- # waitforlisten 100509 00:25:31.839 10:25:51 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:31.839 10:25:51 -- common/autotest_common.sh@829 -- # '[' -z 100509 ']' 00:25:31.839 10:25:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.840 10:25:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:31.840 10:25:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.840 10:25:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:31.840 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 [2024-11-19 10:25:51.383766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:32.098 [2024-11-19 10:25:51.383887] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100509 ] 00:25:32.098 [2024-11-19 10:25:51.516294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:32.098 [2024-11-19 10:25:51.552586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.098 [2024-11-19 10:25:51.552871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.098 [2024-11-19 10:25:51.552885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.357 10:25:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:32.357 10:25:51 -- common/autotest_common.sh@862 -- # return 0 00:25:32.357 10:25:51 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:32.357 10:25:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:32.357 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.357 10:25:51 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:32.357 10:25:51 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:32.357 10:25:51 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:32.357 10:25:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:32.357 10:25:51 -- common/autotest_common.sh@10 -- # set +x 00:25:32.357 10:25:51 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:32.357 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:32.357 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:32.357 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:32.357 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:32.357 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:32.357 
'\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:32.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:32.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:32.357 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:32.357 ' 00:25:32.616 [2024-11-19 10:25:52.124565] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:35.294 [2024-11-19 10:25:54.372344] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.228 [2024-11-19 10:25:55.673508] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:38.756 [2024-11-19 10:25:58.095310] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:40.658 [2024-11-19 10:26:00.184939] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4262 *** 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:42.563 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:42.563 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:42.563 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:42.563 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:42.563 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:42.563 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create 
Malloc6', 'Malloc6', True] 00:25:42.563 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:42.563 10:26:01 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:42.563 10:26:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.563 10:26:01 -- common/autotest_common.sh@10 -- # set +x 00:25:42.563 10:26:01 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:42.563 10:26:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:42.563 10:26:01 -- common/autotest_common.sh@10 -- # set +x 00:25:42.563 10:26:01 -- spdkcli/nvmf.sh@69 -- # check_match 00:25:42.563 10:26:01 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:25:43.132 10:26:02 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:43.132 10:26:02 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:43.132 10:26:02 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:43.132 10:26:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:43.132 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:43.132 10:26:02 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:43.132 10:26:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:43.133 10:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:43.133 10:26:02 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:43.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:43.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:43.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:43.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:43.133 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:43.133 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:43.133 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:43.133 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:43.133 ' 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 
127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:48.403 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:48.403 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:48.403 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:48.403 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:48.662 10:26:08 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:48.662 10:26:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.662 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:48.662 10:26:08 -- spdkcli/nvmf.sh@90 -- # killprocess 100509 00:25:48.662 10:26:08 -- common/autotest_common.sh@936 -- # '[' -z 100509 ']' 00:25:48.662 10:26:08 -- common/autotest_common.sh@940 -- # kill -0 100509 00:25:48.662 10:26:08 -- common/autotest_common.sh@941 -- # uname 00:25:48.662 10:26:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.662 10:26:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100509 00:25:48.662 10:26:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:48.662 10:26:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:48.662 killing process with pid 100509 00:25:48.662 10:26:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100509' 00:25:48.662 10:26:08 -- common/autotest_common.sh@955 -- # kill 100509 00:25:48.662 [2024-11-19 10:26:08.066968] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:48.662 10:26:08 -- common/autotest_common.sh@960 -- # wait 100509 00:25:48.662 10:26:08 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:48.922 10:26:08 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:48.922 10:26:08 -- spdkcli/common.sh@13 -- # '[' -n 100509 ']' 00:25:48.922 10:26:08 -- spdkcli/common.sh@14 -- # killprocess 100509 00:25:48.923 10:26:08 -- common/autotest_common.sh@936 -- # '[' -z 100509 ']' 00:25:48.923 10:26:08 -- common/autotest_common.sh@940 -- # kill -0 100509 00:25:48.923 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (100509) - No such process 00:25:48.923 Process with pid 100509 is not found 00:25:48.923 10:26:08 -- common/autotest_common.sh@963 -- # echo 'Process with pid 100509 is not found' 00:25:48.923 10:26:08 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:48.923 10:26:08 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:48.923 10:26:08 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:48.923 00:25:48.923 real 0m17.091s 00:25:48.923 user 0m37.288s 00:25:48.923 sys 0m0.787s 00:25:48.923 10:26:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:48.923 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:48.923 
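The spdkcli_nvmf_tcp run above drives the whole configuration through spdkcli_job.py and then diffs the output of "spdkcli.py ll /nvmf" against spdkcli_nvmf.test.match before clearing everything again. For reference, a minimal hand-run sketch of the same create/verify/clear cycle, assuming nvmf_tgt is already listening on the default /var/tmp/spdk.sock and the commands are issued from the SPDK repo root (the command paths are the ones passed to spdkcli_job.py above, condensed to a single bdev and subsystem):

    # create a backing bdev, a TCP transport, and one subsystem with a namespace and listener
    scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
    scripts/spdkcli.py 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1'
    scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'
    # inspect the resulting tree, then tear it down
    scripts/spdkcli.py ll /nvmf
    scripts/spdkcli.py '/nvmf/subsystem delete_all'
    scripts/spdkcli.py '/bdevs/malloc delete Malloc1'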
************************************ 00:25:48.923 END TEST spdkcli_nvmf_tcp 00:25:48.923 ************************************ 00:25:48.923 10:26:08 -- spdk/autotest.sh@285 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:48.923 10:26:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:48.923 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:48.923 ************************************ 00:25:48.923 START TEST nvmf_identify_passthru 00:25:48.923 ************************************ 00:25:48.923 10:26:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:48.923 * Looking for test storage... 00:25:48.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:48.923 10:26:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:48.923 10:26:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:48.923 10:26:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:48.923 10:26:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:48.923 10:26:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:48.923 10:26:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:48.923 10:26:08 -- scripts/common.sh@335 -- # IFS=.-: 00:25:48.923 10:26:08 -- scripts/common.sh@335 -- # read -ra ver1 00:25:48.923 10:26:08 -- scripts/common.sh@336 -- # IFS=.-: 00:25:48.923 10:26:08 -- scripts/common.sh@336 -- # read -ra ver2 00:25:48.923 10:26:08 -- scripts/common.sh@337 -- # local 'op=<' 00:25:48.923 10:26:08 -- scripts/common.sh@339 -- # ver1_l=2 00:25:48.923 10:26:08 -- scripts/common.sh@340 -- # ver2_l=1 00:25:48.923 10:26:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:48.923 10:26:08 -- scripts/common.sh@343 -- # case "$op" in 00:25:48.923 10:26:08 -- scripts/common.sh@344 -- # : 1 00:25:48.923 10:26:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:48.923 10:26:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:48.923 10:26:08 -- scripts/common.sh@364 -- # decimal 1 00:25:48.923 10:26:08 -- scripts/common.sh@352 -- # local d=1 00:25:48.923 10:26:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:48.923 10:26:08 -- scripts/common.sh@354 -- # echo 1 00:25:48.923 10:26:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:48.923 10:26:08 -- scripts/common.sh@365 -- # decimal 2 00:25:48.923 10:26:08 -- scripts/common.sh@352 -- # local d=2 00:25:48.923 10:26:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:48.923 10:26:08 -- scripts/common.sh@354 -- # echo 2 00:25:48.923 10:26:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:48.923 10:26:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:48.923 10:26:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:48.923 10:26:08 -- scripts/common.sh@367 -- # return 0 00:25:48.923 10:26:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.923 --rc genhtml_branch_coverage=1 00:25:48.923 --rc genhtml_function_coverage=1 00:25:48.923 --rc genhtml_legend=1 00:25:48.923 --rc geninfo_all_blocks=1 00:25:48.923 --rc geninfo_unexecuted_blocks=1 00:25:48.923 00:25:48.923 ' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.923 --rc genhtml_branch_coverage=1 00:25:48.923 --rc genhtml_function_coverage=1 00:25:48.923 --rc genhtml_legend=1 00:25:48.923 --rc geninfo_all_blocks=1 00:25:48.923 --rc geninfo_unexecuted_blocks=1 00:25:48.923 00:25:48.923 ' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.923 --rc genhtml_branch_coverage=1 00:25:48.923 --rc genhtml_function_coverage=1 00:25:48.923 --rc genhtml_legend=1 00:25:48.923 --rc geninfo_all_blocks=1 00:25:48.923 --rc geninfo_unexecuted_blocks=1 00:25:48.923 00:25:48.923 ' 00:25:48.923 10:26:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:48.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:48.923 --rc genhtml_branch_coverage=1 00:25:48.923 --rc genhtml_function_coverage=1 00:25:48.923 --rc genhtml_legend=1 00:25:48.923 --rc geninfo_all_blocks=1 00:25:48.923 --rc geninfo_unexecuted_blocks=1 00:25:48.923 00:25:48.923 ' 00:25:48.923 10:26:08 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:48.923 10:26:08 -- nvmf/common.sh@7 -- # uname -s 00:25:48.923 10:26:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.923 10:26:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.923 10:26:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.923 10:26:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.923 10:26:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.923 10:26:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.923 10:26:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.923 10:26:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.923 10:26:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.923 10:26:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.923 10:26:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 
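nvmf/common.sh generates a fresh host NQN with "nvme gen-hostnqn" and keeps its UUID as the host ID for tests that later connect with the kernel initiator. A small sketch of that convention, assuming nvme-cli is installed (the connect target shown is only illustrative):

    HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}          # keep just the UUID part
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"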
00:25:48.923 10:26:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:25:48.923 10:26:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.923 10:26:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.923 10:26:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:48.923 10:26:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.923 10:26:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.923 10:26:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.923 10:26:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.923 10:26:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.923 10:26:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.923 10:26:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.923 10:26:08 -- paths/export.sh@5 -- # export PATH 00:25:48.923 10:26:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.923 10:26:08 -- nvmf/common.sh@46 -- # : 0 00:25:48.923 10:26:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.923 10:26:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.923 10:26:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.923 10:26:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.923 10:26:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.923 10:26:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:48.923 10:26:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.923 10:26:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.923 10:26:08 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:48.923 10:26:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.923 10:26:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.923 10:26:08 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.923 10:26:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.923 10:26:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.924 10:26:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.924 10:26:08 -- paths/export.sh@5 -- # export PATH 00:25:48.924 10:26:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.924 10:26:08 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:48.924 10:26:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.924 10:26:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.924 10:26:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.924 10:26:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.924 10:26:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:49.182 10:26:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.182 10:26:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:49.182 10:26:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.182 10:26:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:49.182 10:26:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:49.182 10:26:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:49.182 10:26:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:49.182 10:26:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:49.182 10:26:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:49.182 10:26:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.182 10:26:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.182 10:26:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:49.182 10:26:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:49.182 10:26:08 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:49.182 10:26:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:49.182 10:26:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:49.182 10:26:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.182 10:26:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:49.182 10:26:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:49.182 10:26:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:49.182 10:26:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:49.182 10:26:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:49.182 10:26:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:49.182 Cannot find device "nvmf_tgt_br" 00:25:49.182 10:26:08 -- nvmf/common.sh@154 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:49.182 Cannot find device "nvmf_tgt_br2" 00:25:49.182 10:26:08 -- nvmf/common.sh@155 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:49.182 10:26:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:49.182 Cannot find device "nvmf_tgt_br" 00:25:49.182 10:26:08 -- nvmf/common.sh@157 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:49.182 Cannot find device "nvmf_tgt_br2" 00:25:49.182 10:26:08 -- nvmf/common.sh@158 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:49.182 10:26:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:49.182 10:26:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:49.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.182 10:26:08 -- nvmf/common.sh@161 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:49.182 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:49.182 10:26:08 -- nvmf/common.sh@162 -- # true 00:25:49.182 10:26:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:49.182 10:26:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:49.182 10:26:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:49.183 10:26:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:49.183 10:26:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:49.183 10:26:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:49.183 10:26:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:49.183 10:26:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:49.183 10:26:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:49.183 10:26:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:49.183 10:26:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:49.183 10:26:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:49.183 10:26:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:49.183 10:26:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:49.183 10:26:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:49.183 10:26:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:49.183 10:26:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:49.183 10:26:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:49.183 10:26:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:49.442 10:26:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:49.442 10:26:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:49.442 10:26:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:49.442 10:26:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:49.442 10:26:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:49.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:49.442 00:25:49.442 --- 10.0.0.2 ping statistics --- 00:25:49.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.442 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:49.442 10:26:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:49.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:49.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:49.442 00:25:49.442 --- 10.0.0.3 ping statistics --- 00:25:49.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.442 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:49.442 10:26:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:49.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:49.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:49.442 00:25:49.442 --- 10.0.0.1 ping statistics --- 00:25:49.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.442 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:49.442 10:26:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.442 10:26:08 -- nvmf/common.sh@421 -- # return 0 00:25:49.442 10:26:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:49.442 10:26:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.442 10:26:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:49.442 10:26:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:49.442 10:26:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.442 10:26:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:49.442 10:26:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:49.442 10:26:08 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:49.442 10:26:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.442 10:26:08 -- common/autotest_common.sh@10 -- # set +x 00:25:49.442 10:26:08 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:49.442 10:26:08 -- common/autotest_common.sh@1519 -- # bdfs=() 00:25:49.442 10:26:08 -- common/autotest_common.sh@1519 -- # local bdfs 00:25:49.442 10:26:08 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:25:49.442 10:26:08 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:25:49.442 10:26:08 -- common/autotest_common.sh@1508 -- # bdfs=() 00:25:49.442 10:26:08 -- common/autotest_common.sh@1508 -- # local bdfs 00:25:49.442 10:26:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:49.442 10:26:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:49.442 10:26:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:25:49.442 10:26:08 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:25:49.442 10:26:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:49.442 10:26:08 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:25:49.442 10:26:08 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:49.442 10:26:08 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:49.442 10:26:08 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:49.442 10:26:08 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:49.442 10:26:08 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:49.702 10:26:09 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:49.702 10:26:09 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:49.702 10:26:09 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:49.702 10:26:09 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:49.702 10:26:09 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:49.702 10:26:09 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:49.702 10:26:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:49.702 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:49.961 10:26:09 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:49.961 10:26:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.961 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:49.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.961 10:26:09 -- target/identify_passthru.sh@31 -- # nvmfpid=100995 00:25:49.961 10:26:09 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:49.961 10:26:09 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.961 10:26:09 -- target/identify_passthru.sh@35 -- # waitforlisten 100995 00:25:49.961 10:26:09 -- common/autotest_common.sh@829 -- # '[' -z 100995 ']' 00:25:49.961 10:26:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.961 10:26:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.961 10:26:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.961 10:26:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.961 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:49.961 [2024-11-19 10:26:09.322629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:49.961 [2024-11-19 10:26:09.322881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.961 [2024-11-19 10:26:09.463936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.961 [2024-11-19 10:26:09.500938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:49.961 [2024-11-19 10:26:09.501239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.961 [2024-11-19 10:26:09.501295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.961 [2024-11-19 10:26:09.501432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
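identify_passthru picks the first NVMe controller reported by gen_nvme.sh, reads its serial and model directly over PCIe, and, in the trace that follows, exports the same controller through an NVMe-oF TCP subsystem so the identical identify data can be read back over the fabric. A condensed sketch of that flow with rpc.py, assuming the target just started above, the 0000:00:06.0 controller from this run, the 10.0.0.2 veth address set up earlier, and commands run from the SPDK repo root:

    # 1. pick the first controller SPDK can see and identify it locally
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:'

    # 2. configure the target for passthrough identify and export the controller over TCP
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr    # before framework init (--wait-for-rpc)
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. the same serial and model should now be visible through the subsystem
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep -E 'Serial Number:|Model Number:'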
00:25:49.961 [2024-11-19 10:26:09.501583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.961 [2024-11-19 10:26:09.501660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.961 [2024-11-19 10:26:09.502547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.961 [2024-11-19 10:26:09.502548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.239 10:26:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.239 10:26:09 -- common/autotest_common.sh@862 -- # return 0 00:25:50.239 10:26:09 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.239 10:26:09 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 [2024-11-19 10:26:09.636971] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:50.239 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.239 10:26:09 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 [2024-11-19 10:26:09.650247] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.239 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.239 10:26:09 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:50.239 10:26:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 10:26:09 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 Nvme0n1 00:25:50.239 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.239 10:26:09 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.239 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.239 10:26:09 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:50.239 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.239 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.498 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.498 10:26:09 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.498 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.498 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.498 [2024-11-19 10:26:09.787479] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.498 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:50.498 10:26:09 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:50.498 10:26:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.498 10:26:09 -- common/autotest_common.sh@10 -- # set +x 00:25:50.498 [2024-11-19 10:26:09.795204] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:50.498 [ 00:25:50.498 { 00:25:50.498 "allow_any_host": true, 00:25:50.498 "hosts": [], 00:25:50.498 "listen_addresses": [], 00:25:50.498 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:50.498 "subtype": "Discovery" 00:25:50.498 }, 00:25:50.498 { 00:25:50.498 "allow_any_host": true, 00:25:50.498 "hosts": [], 00:25:50.498 "listen_addresses": [ 00:25:50.498 { 00:25:50.498 "adrfam": "IPv4", 00:25:50.498 "traddr": "10.0.0.2", 00:25:50.498 "transport": "TCP", 00:25:50.498 "trsvcid": "4420", 00:25:50.498 "trtype": "TCP" 00:25:50.498 } 00:25:50.498 ], 00:25:50.498 "max_cntlid": 65519, 00:25:50.498 "max_namespaces": 1, 00:25:50.498 "min_cntlid": 1, 00:25:50.498 "model_number": "SPDK bdev Controller", 00:25:50.498 "namespaces": [ 00:25:50.498 { 00:25:50.498 "bdev_name": "Nvme0n1", 00:25:50.498 "name": "Nvme0n1", 00:25:50.498 "nguid": "D13C55D9928C48C286B3B9F534AB018C", 00:25:50.498 "nsid": 1, 00:25:50.498 "uuid": "d13c55d9-928c-48c2-86b3-b9f534ab018c" 00:25:50.498 } 00:25:50.498 ], 00:25:50.498 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:50.498 "serial_number": "SPDK00000000000001", 00:25:50.498 "subtype": "NVMe" 00:25:50.498 } 00:25:50.498 ] 00:25:50.498 10:26:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.498 10:26:09 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:50.498 10:26:09 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:50.498 10:26:09 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:50.498 10:26:10 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:50.498 10:26:10 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:50.498 10:26:10 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:50.498 10:26:10 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:50.756 10:26:10 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:50.756 10:26:10 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:50.756 10:26:10 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:50.756 10:26:10 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.756 10:26:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.756 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:50.756 10:26:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.756 10:26:10 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:50.756 10:26:10 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:50.756 10:26:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:50.756 10:26:10 -- nvmf/common.sh@116 -- # sync 00:25:51.015 10:26:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:51.015 10:26:10 -- nvmf/common.sh@119 -- # set +e 00:25:51.015 10:26:10 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:51.015 10:26:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:51.015 rmmod nvme_tcp 00:25:51.015 rmmod nvme_fabrics 00:25:51.015 rmmod nvme_keyring 00:25:51.015 10:26:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:51.015 10:26:10 -- nvmf/common.sh@123 -- # set -e 00:25:51.015 10:26:10 -- nvmf/common.sh@124 -- # return 0 00:25:51.015 10:26:10 -- nvmf/common.sh@477 -- # '[' -n 100995 ']' 00:25:51.015 10:26:10 -- nvmf/common.sh@478 -- # killprocess 100995 00:25:51.015 10:26:10 -- common/autotest_common.sh@936 -- # '[' -z 100995 ']' 00:25:51.015 10:26:10 -- common/autotest_common.sh@940 -- # kill -0 100995 00:25:51.015 10:26:10 -- common/autotest_common.sh@941 -- # uname 00:25:51.015 10:26:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:51.015 10:26:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 100995 00:25:51.015 10:26:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:51.015 10:26:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:51.015 killing process with pid 100995 00:25:51.015 10:26:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 100995' 00:25:51.015 10:26:10 -- common/autotest_common.sh@955 -- # kill 100995 00:25:51.015 [2024-11-19 10:26:10.407728] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:51.015 10:26:10 -- common/autotest_common.sh@960 -- # wait 100995 00:25:51.015 10:26:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:51.015 10:26:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:51.015 10:26:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:51.015 10:26:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.015 10:26:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:51.015 10:26:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.015 10:26:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:51.015 10:26:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.274 10:26:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:51.274 00:25:51.274 real 0m2.316s 00:25:51.274 user 0m4.597s 00:25:51.274 sys 0m0.734s 00:25:51.274 10:26:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:51.274 ************************************ 00:25:51.274 END TEST nvmf_identify_passthru 00:25:51.274 ************************************ 00:25:51.274 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:51.274 10:26:10 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:51.274 10:26:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:51.274 10:26:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:51.274 10:26:10 -- common/autotest_common.sh@10 -- # set +x 00:25:51.274 ************************************ 00:25:51.274 START TEST nvmf_dif 00:25:51.274 ************************************ 00:25:51.274 10:26:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:51.274 * Looking for test storage... 
00:25:51.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:51.274 10:26:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:51.274 10:26:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:51.274 10:26:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:51.274 10:26:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:51.274 10:26:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:51.274 10:26:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:51.274 10:26:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:51.274 10:26:10 -- scripts/common.sh@335 -- # IFS=.-: 00:25:51.274 10:26:10 -- scripts/common.sh@335 -- # read -ra ver1 00:25:51.274 10:26:10 -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.274 10:26:10 -- scripts/common.sh@336 -- # read -ra ver2 00:25:51.274 10:26:10 -- scripts/common.sh@337 -- # local 'op=<' 00:25:51.274 10:26:10 -- scripts/common.sh@339 -- # ver1_l=2 00:25:51.274 10:26:10 -- scripts/common.sh@340 -- # ver2_l=1 00:25:51.274 10:26:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:51.274 10:26:10 -- scripts/common.sh@343 -- # case "$op" in 00:25:51.274 10:26:10 -- scripts/common.sh@344 -- # : 1 00:25:51.274 10:26:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:51.274 10:26:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.274 10:26:10 -- scripts/common.sh@364 -- # decimal 1 00:25:51.274 10:26:10 -- scripts/common.sh@352 -- # local d=1 00:25:51.274 10:26:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.274 10:26:10 -- scripts/common.sh@354 -- # echo 1 00:25:51.274 10:26:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:51.274 10:26:10 -- scripts/common.sh@365 -- # decimal 2 00:25:51.274 10:26:10 -- scripts/common.sh@352 -- # local d=2 00:25:51.274 10:26:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.274 10:26:10 -- scripts/common.sh@354 -- # echo 2 00:25:51.274 10:26:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:51.274 10:26:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:51.274 10:26:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:51.274 10:26:10 -- scripts/common.sh@367 -- # return 0 00:25:51.274 10:26:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.274 10:26:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.274 --rc genhtml_branch_coverage=1 00:25:51.274 --rc genhtml_function_coverage=1 00:25:51.274 --rc genhtml_legend=1 00:25:51.274 --rc geninfo_all_blocks=1 00:25:51.274 --rc geninfo_unexecuted_blocks=1 00:25:51.274 00:25:51.274 ' 00:25:51.274 10:26:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.274 --rc genhtml_branch_coverage=1 00:25:51.274 --rc genhtml_function_coverage=1 00:25:51.274 --rc genhtml_legend=1 00:25:51.274 --rc geninfo_all_blocks=1 00:25:51.274 --rc geninfo_unexecuted_blocks=1 00:25:51.274 00:25:51.274 ' 00:25:51.274 10:26:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.274 --rc genhtml_branch_coverage=1 00:25:51.274 --rc genhtml_function_coverage=1 00:25:51.274 --rc genhtml_legend=1 00:25:51.274 --rc geninfo_all_blocks=1 00:25:51.274 --rc geninfo_unexecuted_blocks=1 00:25:51.274 00:25:51.274 ' 00:25:51.274 
10:26:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:51.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.274 --rc genhtml_branch_coverage=1 00:25:51.274 --rc genhtml_function_coverage=1 00:25:51.274 --rc genhtml_legend=1 00:25:51.274 --rc geninfo_all_blocks=1 00:25:51.274 --rc geninfo_unexecuted_blocks=1 00:25:51.274 00:25:51.274 ' 00:25:51.274 10:26:10 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:51.274 10:26:10 -- nvmf/common.sh@7 -- # uname -s 00:25:51.274 10:26:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.274 10:26:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.274 10:26:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.274 10:26:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.274 10:26:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.274 10:26:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.274 10:26:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.274 10:26:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.274 10:26:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.274 10:26:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.533 10:26:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:25:51.533 10:26:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:25:51.533 10:26:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.533 10:26:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.533 10:26:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:51.533 10:26:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:51.533 10:26:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.533 10:26:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.533 10:26:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.533 10:26:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.533 10:26:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.533 10:26:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.533 10:26:10 -- paths/export.sh@5 -- # export PATH 00:25:51.533 10:26:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.533 10:26:10 -- nvmf/common.sh@46 -- # : 0 00:25:51.533 10:26:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:51.533 10:26:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:51.533 10:26:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:51.534 10:26:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.534 10:26:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.534 10:26:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:51.534 10:26:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:51.534 10:26:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:51.534 10:26:10 -- target/dif.sh@15 -- # NULL_META=16 00:25:51.534 10:26:10 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:51.534 10:26:10 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:51.534 10:26:10 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:51.534 10:26:10 -- target/dif.sh@135 -- # nvmftestinit 00:25:51.534 10:26:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:51.534 10:26:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.534 10:26:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:51.534 10:26:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:51.534 10:26:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:51.534 10:26:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.534 10:26:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:51.534 10:26:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.534 10:26:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:51.534 10:26:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:51.534 10:26:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:51.534 10:26:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:51.534 10:26:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:51.534 10:26:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:51.534 10:26:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.534 10:26:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.534 10:26:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:51.534 10:26:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:51.534 10:26:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:51.534 10:26:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:51.534 10:26:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:51.534 10:26:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.534 10:26:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:51.534 10:26:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:51.534 10:26:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:51.534 10:26:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:51.534 10:26:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:51.534 10:26:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:51.534 Cannot find device "nvmf_tgt_br" 
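The trace that follows rebuilds the veth and bridge topology every nvmf TCP test here relies on: the target end lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator end stays on the host as 10.0.0.1, and both peer interfaces are enslaved to one bridge. A standalone sketch of that topology, assuming iproute2 and root privileges (the second target interface on 10.0.0.3 is left out for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host reaches the target namespace across the bridge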
00:25:51.534 10:26:10 -- nvmf/common.sh@154 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:51.534 Cannot find device "nvmf_tgt_br2" 00:25:51.534 10:26:10 -- nvmf/common.sh@155 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:51.534 10:26:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:51.534 Cannot find device "nvmf_tgt_br" 00:25:51.534 10:26:10 -- nvmf/common.sh@157 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:51.534 Cannot find device "nvmf_tgt_br2" 00:25:51.534 10:26:10 -- nvmf/common.sh@158 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:51.534 10:26:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:51.534 10:26:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:51.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.534 10:26:10 -- nvmf/common.sh@161 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:51.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:51.534 10:26:10 -- nvmf/common.sh@162 -- # true 00:25:51.534 10:26:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:51.534 10:26:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:51.534 10:26:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:51.534 10:26:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:51.534 10:26:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:51.534 10:26:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:51.534 10:26:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:51.534 10:26:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:51.534 10:26:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:51.534 10:26:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:51.534 10:26:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:51.534 10:26:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:51.534 10:26:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:51.534 10:26:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:51.792 10:26:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:51.792 10:26:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:51.792 10:26:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:51.792 10:26:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:51.792 10:26:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:51.792 10:26:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:51.792 10:26:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:51.792 10:26:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:51.792 10:26:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:51.792 10:26:11 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:51.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:25:51.793 00:25:51.793 --- 10.0.0.2 ping statistics --- 00:25:51.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.793 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:25:51.793 10:26:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:51.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:51.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:25:51.793 00:25:51.793 --- 10.0.0.3 ping statistics --- 00:25:51.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.793 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:51.793 10:26:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:51.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:51.793 00:25:51.793 --- 10.0.0.1 ping statistics --- 00:25:51.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.793 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:51.793 10:26:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.793 10:26:11 -- nvmf/common.sh@421 -- # return 0 00:25:51.793 10:26:11 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:51.793 10:26:11 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:52.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:52.051 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:52.051 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:52.051 10:26:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.051 10:26:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:52.051 10:26:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:52.051 10:26:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.052 10:26:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:52.052 10:26:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:52.052 10:26:11 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:52.052 10:26:11 -- target/dif.sh@137 -- # nvmfappstart 00:25:52.052 10:26:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:52.052 10:26:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:52.052 10:26:11 -- common/autotest_common.sh@10 -- # set +x 00:25:52.052 10:26:11 -- nvmf/common.sh@469 -- # nvmfpid=101335 00:25:52.052 10:26:11 -- nvmf/common.sh@470 -- # waitforlisten 101335 00:25:52.052 10:26:11 -- common/autotest_common.sh@829 -- # '[' -z 101335 ']' 00:25:52.052 10:26:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.052 10:26:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.052 10:26:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:52.052 10:26:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
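The setup traced above (nvmf_veth_init in nvmf/common.sh) builds the test network out of two veth pairs, a target-side network namespace, and a bridge, then proves connectivity with the three pings before the target application is started. A minimal standalone sketch of that topology, using the same interface names and 10.0.0.0/24 addressing as the suite but omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the cleanup pass that precedes setup in the real script:

#!/usr/bin/env bash
# Sketch only: condensed version of the veth/bridge layout created in the trace above.
set -e
ip netns add nvmf_tgt_ns_spdk                                   # namespace the target will run in
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge tying both host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # same reachability check as the trace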
00:25:52.052 10:26:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.052 10:26:11 -- common/autotest_common.sh@10 -- # set +x 00:25:52.311 [2024-11-19 10:26:11.624034] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:52.311 [2024-11-19 10:26:11.624128] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.311 [2024-11-19 10:26:11.769577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.311 [2024-11-19 10:26:11.804286] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:52.311 [2024-11-19 10:26:11.804429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.311 [2024-11-19 10:26:11.804441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.311 [2024-11-19 10:26:11.804450] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.311 [2024-11-19 10:26:11.804479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.246 10:26:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.246 10:26:12 -- common/autotest_common.sh@862 -- # return 0 00:25:53.246 10:26:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:53.246 10:26:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 10:26:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.246 10:26:12 -- target/dif.sh@139 -- # create_transport 00:25:53.246 10:26:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:53.246 10:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 [2024-11-19 10:26:12.689748] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.246 10:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.246 10:26:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:53.246 10:26:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:53.246 10:26:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 ************************************ 00:25:53.246 START TEST fio_dif_1_default 00:25:53.246 ************************************ 00:25:53.246 10:26:12 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:25:53.246 10:26:12 -- target/dif.sh@86 -- # create_subsystems 0 00:25:53.246 10:26:12 -- target/dif.sh@28 -- # local sub 00:25:53.246 10:26:12 -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.246 10:26:12 -- target/dif.sh@31 -- # create_subsystem 0 00:25:53.246 10:26:12 -- target/dif.sh@18 -- # local sub_id=0 00:25:53.246 10:26:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:53.246 10:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 bdev_null0 00:25:53.246 10:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.246 10:26:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:53.246 10:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 10:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.246 10:26:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:53.246 10:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 10:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.246 10:26:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.246 10:26:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.246 10:26:12 -- common/autotest_common.sh@10 -- # set +x 00:25:53.246 [2024-11-19 10:26:12.737899] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.246 10:26:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.246 10:26:12 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:53.246 10:26:12 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:53.246 10:26:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:53.246 10:26:12 -- nvmf/common.sh@520 -- # config=() 00:25:53.246 10:26:12 -- nvmf/common.sh@520 -- # local subsystem config 00:25:53.246 10:26:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:53.246 10:26:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:53.246 { 00:25:53.246 "params": { 00:25:53.246 "name": "Nvme$subsystem", 00:25:53.246 "trtype": "$TEST_TRANSPORT", 00:25:53.246 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.246 "adrfam": "ipv4", 00:25:53.246 "trsvcid": "$NVMF_PORT", 00:25:53.246 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.246 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.246 "hdgst": ${hdgst:-false}, 00:25:53.246 "ddgst": ${ddgst:-false} 00:25:53.246 }, 00:25:53.246 "method": "bdev_nvme_attach_controller" 00:25:53.246 } 00:25:53.246 EOF 00:25:53.246 )") 00:25:53.246 10:26:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.246 10:26:12 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.246 10:26:12 -- target/dif.sh@82 -- # gen_fio_conf 00:25:53.246 10:26:12 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:25:53.246 10:26:12 -- target/dif.sh@54 -- # local file 00:25:53.246 10:26:12 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.246 10:26:12 -- target/dif.sh@56 -- # cat 00:25:53.246 10:26:12 -- common/autotest_common.sh@1328 -- # local sanitizers 00:25:53.246 10:26:12 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.246 10:26:12 -- common/autotest_common.sh@1330 -- # shift 00:25:53.246 10:26:12 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:25:53.246 10:26:12 -- nvmf/common.sh@542 -- # cat 00:25:53.246 10:26:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.246 10:26:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:53.246 10:26:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.246 10:26:12 -- common/autotest_common.sh@1334 -- # grep libasan 00:25:53.246 
10:26:12 -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.246 10:26:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:53.246 10:26:12 -- nvmf/common.sh@544 -- # jq . 00:25:53.247 10:26:12 -- nvmf/common.sh@545 -- # IFS=, 00:25:53.247 10:26:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:53.247 "params": { 00:25:53.247 "name": "Nvme0", 00:25:53.247 "trtype": "tcp", 00:25:53.247 "traddr": "10.0.0.2", 00:25:53.247 "adrfam": "ipv4", 00:25:53.247 "trsvcid": "4420", 00:25:53.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.247 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:53.247 "hdgst": false, 00:25:53.247 "ddgst": false 00:25:53.247 }, 00:25:53.247 "method": "bdev_nvme_attach_controller" 00:25:53.247 }' 00:25:53.247 10:26:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:53.247 10:26:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:53.247 10:26:12 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.247 10:26:12 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:53.247 10:26:12 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:25:53.247 10:26:12 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:25:53.505 10:26:12 -- common/autotest_common.sh@1334 -- # asan_lib= 00:25:53.505 10:26:12 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:25:53.505 10:26:12 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:53.505 10:26:12 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.505 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.505 fio-3.35 00:25:53.505 Starting 1 thread 00:25:53.764 [2024-11-19 10:26:13.306760] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
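The JSON fragment printed just above is what gen_nvmf_target_json hands to fio on /dev/fd/62: a single bdev_nvme_attach_controller call pointing at nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. Roughly the same run can be reproduced by hand; note that the envelope wrapped around the printed params below follows the usual SPDK JSON-config layout and the job options are read off the fio banner in the trace, so treat both as an approximation rather than the exact files the test generates.

# Sketch only: manual equivalent of the traced fio_bdev invocation.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The attached controller "Nvme0" exposes its namespace as bdev "Nvme0n1" (assumed bdev name).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
    --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=4 --runtime=10 --time_based=1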
00:25:53.764 [2024-11-19 10:26:13.306847] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:05.967 00:26:05.967 filename0: (groupid=0, jobs=1): err= 0: pid=101419: Tue Nov 19 10:26:23 2024 00:26:05.967 read: IOPS=4730, BW=18.5MiB/s (19.4MB/s)(185MiB/10001msec) 00:26:05.967 slat (nsec): min=5210, max=98451, avg=8761.28, stdev=2490.20 00:26:05.967 clat (usec): min=389, max=42484, avg=819.87, stdev=3707.48 00:26:05.967 lat (usec): min=396, max=42494, avg=828.64, stdev=3707.52 00:26:05.967 clat percentiles (usec): 00:26:05.967 | 1.00th=[ 433], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 461], 00:26:05.967 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 482], 00:26:05.967 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 510], 95.00th=[ 529], 00:26:05.967 | 99.00th=[ 586], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:26:05.967 | 99.99th=[41681] 00:26:05.967 bw ( KiB/s): min=11360, max=24448, per=98.76%, avg=18689.68, stdev=4484.79, samples=19 00:26:05.967 iops : min= 2840, max= 6112, avg=4672.42, stdev=1121.20, samples=19 00:26:05.967 lat (usec) : 500=85.49%, 750=13.66% 00:26:05.967 lat (msec) : 4=0.01%, 50=0.85% 00:26:05.967 cpu : usr=88.70%, sys=9.40%, ctx=56, majf=0, minf=8 00:26:05.967 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.967 issued rwts: total=47312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.967 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.967 00:26:05.967 Run status group 0 (all jobs): 00:26:05.967 READ: bw=18.5MiB/s (19.4MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=185MiB (194MB), run=10001-10001msec 00:26:05.967 10:26:23 -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:05.967 10:26:23 -- target/dif.sh@43 -- # local sub 00:26:05.967 10:26:23 -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.967 10:26:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.967 10:26:23 -- target/dif.sh@36 -- # local sub_id=0 00:26:05.967 10:26:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.967 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.967 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.967 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.967 10:26:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.967 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.967 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.967 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.967 00:26:05.967 real 0m10.882s 00:26:05.967 user 0m9.431s 00:26:05.967 sys 0m1.165s 00:26:05.967 10:26:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:05.967 ************************************ 00:26:05.967 END TEST fio_dif_1_default 00:26:05.967 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.967 ************************************ 00:26:05.967 10:26:23 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:05.967 10:26:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:05.967 10:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:05.967 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 ************************************ 00:26:05.968 START TEST 
fio_dif_1_multi_subsystems 00:26:05.968 ************************************ 00:26:05.968 10:26:23 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:26:05.968 10:26:23 -- target/dif.sh@92 -- # local files=1 00:26:05.968 10:26:23 -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:05.968 10:26:23 -- target/dif.sh@28 -- # local sub 00:26:05.968 10:26:23 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.968 10:26:23 -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.968 10:26:23 -- target/dif.sh@18 -- # local sub_id=0 00:26:05.968 10:26:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 bdev_null0 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 [2024-11-19 10:26:23.669040] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.968 10:26:23 -- target/dif.sh@31 -- # create_subsystem 1 00:26:05.968 10:26:23 -- target/dif.sh@18 -- # local sub_id=1 00:26:05.968 10:26:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 bdev_null1 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.968 10:26:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.968 10:26:23 -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.968 10:26:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.968 10:26:23 -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:05.968 10:26:23 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:05.968 10:26:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:05.968 10:26:23 -- nvmf/common.sh@520 -- # config=() 00:26:05.968 10:26:23 -- nvmf/common.sh@520 -- # local subsystem config 00:26:05.968 10:26:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.968 10:26:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.968 { 00:26:05.968 "params": { 00:26:05.968 "name": "Nvme$subsystem", 00:26:05.968 "trtype": "$TEST_TRANSPORT", 00:26:05.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.968 "adrfam": "ipv4", 00:26:05.968 "trsvcid": "$NVMF_PORT", 00:26:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.968 "hdgst": ${hdgst:-false}, 00:26:05.968 "ddgst": ${ddgst:-false} 00:26:05.968 }, 00:26:05.968 "method": "bdev_nvme_attach_controller" 00:26:05.968 } 00:26:05.968 EOF 00:26:05.968 )") 00:26:05.968 10:26:23 -- target/dif.sh@82 -- # gen_fio_conf 00:26:05.968 10:26:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.968 10:26:23 -- target/dif.sh@54 -- # local file 00:26:05.968 10:26:23 -- target/dif.sh@56 -- # cat 00:26:05.968 10:26:23 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.968 10:26:23 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:05.968 10:26:23 -- nvmf/common.sh@542 -- # cat 00:26:05.968 10:26:23 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:05.968 10:26:23 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:05.968 10:26:23 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.968 10:26:23 -- common/autotest_common.sh@1330 -- # shift 00:26:05.968 10:26:23 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:05.968 10:26:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.968 10:26:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:05.968 10:26:23 -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.968 10:26:23 -- target/dif.sh@73 -- # cat 00:26:05.968 10:26:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:05.968 10:26:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:05.968 { 00:26:05.968 "params": { 00:26:05.968 "name": "Nvme$subsystem", 00:26:05.968 "trtype": "$TEST_TRANSPORT", 00:26:05.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.968 "adrfam": "ipv4", 00:26:05.968 "trsvcid": "$NVMF_PORT", 00:26:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.968 "hdgst": ${hdgst:-false}, 00:26:05.968 "ddgst": ${ddgst:-false} 00:26:05.968 }, 00:26:05.968 "method": "bdev_nvme_attach_controller" 00:26:05.968 } 00:26:05.968 EOF 00:26:05.968 )") 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.968 10:26:23 -- nvmf/common.sh@542 -- # cat 00:26:05.968 10:26:23 -- target/dif.sh@72 
-- # (( file++ )) 00:26:05.968 10:26:23 -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.968 10:26:23 -- nvmf/common.sh@544 -- # jq . 00:26:05.968 10:26:23 -- nvmf/common.sh@545 -- # IFS=, 00:26:05.968 10:26:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:05.968 "params": { 00:26:05.968 "name": "Nvme0", 00:26:05.968 "trtype": "tcp", 00:26:05.968 "traddr": "10.0.0.2", 00:26:05.968 "adrfam": "ipv4", 00:26:05.968 "trsvcid": "4420", 00:26:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.968 "hdgst": false, 00:26:05.968 "ddgst": false 00:26:05.968 }, 00:26:05.968 "method": "bdev_nvme_attach_controller" 00:26:05.968 },{ 00:26:05.968 "params": { 00:26:05.968 "name": "Nvme1", 00:26:05.968 "trtype": "tcp", 00:26:05.968 "traddr": "10.0.0.2", 00:26:05.968 "adrfam": "ipv4", 00:26:05.968 "trsvcid": "4420", 00:26:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:05.968 "hdgst": false, 00:26:05.968 "ddgst": false 00:26:05.968 }, 00:26:05.968 "method": "bdev_nvme_attach_controller" 00:26:05.968 }' 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.968 10:26:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.968 10:26:23 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:05.968 10:26:23 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:05.968 10:26:23 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:05.968 10:26:23 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:05.968 10:26:23 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.968 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:05.968 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:05.968 fio-3.35 00:26:05.968 Starting 2 threads 00:26:05.968 [2024-11-19 10:26:24.371138] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
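Each fio filename above maps to its own NVMe-oF subsystem; the rpc_cmd calls traced earlier created a null bdev per subsystem, added it as a namespace, and opened a TCP listener on the target address. rpc_cmd is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock, so the same two-subsystem layout can be driven manually along these lines (a sketch, assuming the target is already running and the tcp transport has been created):

# Sketch only: per-subsystem RPC sequence as seen in the trace, issued through rpc.py.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
for i in 0 1; do
  $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1        # 64 MiB null bdev, 512 B blocks + 16 B metadata
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
       --serial-number 53313233-$i --allow-any-host                          # one subsystem per fio file
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i         # expose the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
       -t tcp -a 10.0.0.2 -s 4420                                            # listen on the target veth address
done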
00:26:05.968 [2024-11-19 10:26:24.371202] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:15.942 00:26:15.942 filename0: (groupid=0, jobs=1): err= 0: pid=101579: Tue Nov 19 10:26:34 2024 00:26:15.942 read: IOPS=216, BW=868KiB/s (888kB/s)(8704KiB/10033msec) 00:26:15.942 slat (nsec): min=7071, max=32677, avg=9664.05, stdev=3477.70 00:26:15.942 clat (usec): min=433, max=42517, avg=18412.20, stdev=20156.19 00:26:15.942 lat (usec): min=440, max=42528, avg=18421.87, stdev=20156.45 00:26:15.942 clat percentiles (usec): 00:26:15.942 | 1.00th=[ 453], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 482], 00:26:15.942 | 30.00th=[ 494], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[40633], 00:26:15.942 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:26:15.942 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:26:15.942 | 99.99th=[42730] 00:26:15.942 bw ( KiB/s): min= 577, max= 1248, per=52.04%, avg=868.85, stdev=193.12, samples=20 00:26:15.942 iops : min= 144, max= 312, avg=217.20, stdev=48.30, samples=20 00:26:15.942 lat (usec) : 500=37.73%, 750=16.31%, 1000=1.65% 00:26:15.942 lat (msec) : 4=0.18%, 50=44.12% 00:26:15.942 cpu : usr=95.96%, sys=3.60%, ctx=58, majf=0, minf=0 00:26:15.942 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.942 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.942 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:15.942 filename1: (groupid=0, jobs=1): err= 0: pid=101580: Tue Nov 19 10:26:34 2024 00:26:15.942 read: IOPS=200, BW=802KiB/s (822kB/s)(8032KiB/10010msec) 00:26:15.942 slat (nsec): min=4258, max=39590, avg=9860.05, stdev=3802.44 00:26:15.942 clat (usec): min=445, max=42518, avg=19909.15, stdev=20259.47 00:26:15.942 lat (usec): min=452, max=42529, avg=19919.01, stdev=20259.46 00:26:15.942 clat percentiles (usec): 00:26:15.942 | 1.00th=[ 453], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:26:15.942 | 30.00th=[ 498], 40.00th=[ 510], 50.00th=[ 799], 60.00th=[41157], 00:26:15.942 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:26:15.942 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:26:15.942 | 99.99th=[42730] 00:26:15.942 bw ( KiB/s): min= 576, max= 1056, per=48.02%, avg=801.60, stdev=135.16, samples=20 00:26:15.942 iops : min= 144, max= 264, avg=200.40, stdev=33.79, samples=20 00:26:15.942 lat (usec) : 500=32.67%, 750=16.14%, 1000=3.19% 00:26:15.942 lat (msec) : 4=0.20%, 50=47.81% 00:26:15.942 cpu : usr=95.91%, sys=3.66%, ctx=7, majf=0, minf=9 00:26:15.942 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:15.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:15.942 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:15.942 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:15.942 00:26:15.942 Run status group 0 (all jobs): 00:26:15.942 READ: bw=1668KiB/s (1708kB/s), 802KiB/s-868KiB/s (822kB/s-888kB/s), io=16.3MiB (17.1MB), run=10010-10033msec 00:26:15.942 10:26:34 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:15.942 10:26:34 -- target/dif.sh@43 -- # local sub 00:26:15.942 10:26:34 -- target/dif.sh@45 -- # for sub in 
"$@" 00:26:15.942 10:26:34 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:15.942 10:26:34 -- target/dif.sh@36 -- # local sub_id=0 00:26:15.942 10:26:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@45 -- # for sub in "$@" 00:26:15.942 10:26:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:15.942 10:26:34 -- target/dif.sh@36 -- # local sub_id=1 00:26:15.942 10:26:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 00:26:15.942 real 0m11.061s 00:26:15.942 user 0m19.945s 00:26:15.942 sys 0m0.970s 00:26:15.942 10:26:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:15.942 ************************************ 00:26:15.942 END TEST fio_dif_1_multi_subsystems 00:26:15.942 ************************************ 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:15.942 10:26:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:15.942 10:26:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 ************************************ 00:26:15.942 START TEST fio_dif_rand_params 00:26:15.942 ************************************ 00:26:15.942 10:26:34 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:26:15.942 10:26:34 -- target/dif.sh@100 -- # local NULL_DIF 00:26:15.942 10:26:34 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:15.942 10:26:34 -- target/dif.sh@103 -- # NULL_DIF=3 00:26:15.942 10:26:34 -- target/dif.sh@103 -- # bs=128k 00:26:15.942 10:26:34 -- target/dif.sh@103 -- # numjobs=3 00:26:15.942 10:26:34 -- target/dif.sh@103 -- # iodepth=3 00:26:15.942 10:26:34 -- target/dif.sh@103 -- # runtime=5 00:26:15.942 10:26:34 -- target/dif.sh@105 -- # create_subsystems 0 00:26:15.942 10:26:34 -- target/dif.sh@28 -- # local sub 00:26:15.942 10:26:34 -- target/dif.sh@30 -- # for sub in "$@" 00:26:15.942 10:26:34 -- target/dif.sh@31 -- # create_subsystem 0 00:26:15.942 10:26:34 -- target/dif.sh@18 -- # local sub_id=0 00:26:15.942 10:26:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 bdev_null0 00:26:15.942 10:26:34 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:15.942 10:26:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.942 10:26:34 -- common/autotest_common.sh@10 -- # set +x 00:26:15.942 [2024-11-19 10:26:34.787347] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.942 10:26:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.942 10:26:34 -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:15.942 10:26:34 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:15.942 10:26:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:15.942 10:26:34 -- nvmf/common.sh@520 -- # config=() 00:26:15.942 10:26:34 -- nvmf/common.sh@520 -- # local subsystem config 00:26:15.942 10:26:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:15.942 10:26:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:15.942 { 00:26:15.942 "params": { 00:26:15.942 "name": "Nvme$subsystem", 00:26:15.942 "trtype": "$TEST_TRANSPORT", 00:26:15.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:15.942 "adrfam": "ipv4", 00:26:15.942 "trsvcid": "$NVMF_PORT", 00:26:15.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:15.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:15.942 "hdgst": ${hdgst:-false}, 00:26:15.942 "ddgst": ${ddgst:-false} 00:26:15.942 }, 00:26:15.942 "method": "bdev_nvme_attach_controller" 00:26:15.942 } 00:26:15.942 EOF 00:26:15.942 )") 00:26:15.942 10:26:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.942 10:26:34 -- target/dif.sh@82 -- # gen_fio_conf 00:26:15.942 10:26:34 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.942 10:26:34 -- target/dif.sh@54 -- # local file 00:26:15.942 10:26:34 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:15.942 10:26:34 -- target/dif.sh@56 -- # cat 00:26:15.942 10:26:34 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:15.942 10:26:34 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:15.942 10:26:34 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.942 10:26:34 -- common/autotest_common.sh@1330 -- # shift 00:26:15.942 10:26:34 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:15.942 10:26:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.942 10:26:34 -- nvmf/common.sh@542 -- # cat 00:26:15.942 10:26:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.942 10:26:34 
-- common/autotest_common.sh@1334 -- # grep libasan 00:26:15.943 10:26:34 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.943 10:26:34 -- nvmf/common.sh@544 -- # jq . 00:26:15.943 10:26:34 -- target/dif.sh@72 -- # (( file <= files )) 00:26:15.943 10:26:34 -- nvmf/common.sh@545 -- # IFS=, 00:26:15.943 10:26:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:15.943 "params": { 00:26:15.943 "name": "Nvme0", 00:26:15.943 "trtype": "tcp", 00:26:15.943 "traddr": "10.0.0.2", 00:26:15.943 "adrfam": "ipv4", 00:26:15.943 "trsvcid": "4420", 00:26:15.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:15.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:15.943 "hdgst": false, 00:26:15.943 "ddgst": false 00:26:15.943 }, 00:26:15.943 "method": "bdev_nvme_attach_controller" 00:26:15.943 }' 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.943 10:26:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.943 10:26:34 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:15.943 10:26:34 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:15.943 10:26:34 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:15.943 10:26:34 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:15.943 10:26:34 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:15.943 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:15.943 ... 00:26:15.943 fio-3.35 00:26:15.943 Starting 3 threads 00:26:15.943 [2024-11-19 10:26:35.352650] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
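Relative to the earlier runs, fio_dif_rand_params mostly changes the protection settings: the trace shows the null bdev being created with --md-size 16 --dif-type 3 and the jobs issuing 128 KiB reads at iodepth 3 across 3 jobs for 5 seconds. DIF only flows end to end because the TCP transport was created earlier with the --dif-insert-or-strip option (the NVMF_TRANSPORT_OPTS value built up in the trace). Condensed to just those two knobs, and assuming a running target, the relevant RPCs are:

# Sketch only: the DIF-related pieces of this test, copied from the traced commands.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
# Transport created once with target-side DIF insert/strip enabled.
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
# Null bdev carrying DIF type 3 protection in 16 bytes of per-block metadata.
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3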
00:26:15.943 [2024-11-19 10:26:35.352721] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:21.214 00:26:21.214 filename0: (groupid=0, jobs=1): err= 0: pid=101735: Tue Nov 19 10:26:40 2024 00:26:21.214 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(124MiB/5007msec) 00:26:21.214 slat (nsec): min=7286, max=29664, avg=10559.93, stdev=3134.73 00:26:21.214 clat (usec): min=7381, max=22161, avg=15099.84, stdev=1959.32 00:26:21.214 lat (usec): min=7389, max=22172, avg=15110.40, stdev=1959.48 00:26:21.214 clat percentiles (usec): 00:26:21.214 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[13173], 20.00th=[14484], 00:26:21.214 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15533], 60.00th=[15664], 00:26:21.214 | 70.00th=[15926], 80.00th=[16188], 90.00th=[16712], 95.00th=[17171], 00:26:21.214 | 99.00th=[20055], 99.50th=[20055], 99.90th=[22152], 99.95th=[22152], 00:26:21.214 | 99.99th=[22152] 00:26:21.214 bw ( KiB/s): min=23040, max=27648, per=28.04%, avg=25344.00, stdev=1448.15, samples=10 00:26:21.214 iops : min= 180, max= 216, avg=198.00, stdev=11.31, samples=10 00:26:21.214 lat (msec) : 10=5.14%, 20=93.76%, 50=1.11% 00:26:21.214 cpu : usr=93.71%, sys=5.01%, ctx=5, majf=0, minf=11 00:26:21.214 IO depths : 1=28.4%, 2=71.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 issued rwts: total=993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.214 filename0: (groupid=0, jobs=1): err= 0: pid=101736: Tue Nov 19 10:26:40 2024 00:26:21.214 read: IOPS=273, BW=34.2MiB/s (35.8MB/s)(171MiB/5007msec) 00:26:21.214 slat (nsec): min=7970, max=58565, avg=12378.18, stdev=2805.99 00:26:21.214 clat (usec): min=5332, max=51494, avg=10954.89, stdev=2920.67 00:26:21.214 lat (usec): min=5343, max=51507, avg=10967.27, stdev=2921.01 00:26:21.214 clat percentiles (usec): 00:26:21.214 | 1.00th=[ 6783], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[10028], 00:26:21.214 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:26:21.214 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12649], 00:26:21.214 | 99.00th=[15270], 99.50th=[17695], 99.90th=[51643], 99.95th=[51643], 00:26:21.214 | 99.99th=[51643] 00:26:21.214 bw ( KiB/s): min=29184, max=37888, per=38.70%, avg=34969.60, stdev=2526.21, samples=10 00:26:21.214 iops : min= 228, max= 296, avg=273.20, stdev=19.74, samples=10 00:26:21.214 lat (msec) : 10=18.12%, 20=81.45%, 50=0.22%, 100=0.22% 00:26:21.214 cpu : usr=92.77%, sys=5.77%, ctx=5, majf=0, minf=0 00:26:21.214 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 issued rwts: total=1369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.214 filename0: (groupid=0, jobs=1): err= 0: pid=101737: Tue Nov 19 10:26:40 2024 00:26:21.214 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5006msec) 00:26:21.214 slat (usec): min=6, max=177, avg=12.36, stdev= 6.68 00:26:21.214 clat (usec): min=6359, max=54068, avg=12782.38, stdev=4732.83 00:26:21.214 lat (usec): min=6375, max=54079, avg=12794.74, stdev=4732.82 00:26:21.214 clat percentiles (usec): 00:26:21.214 | 
1.00th=[ 7701], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:26:21.214 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:26:21.214 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13698], 95.00th=[14484], 00:26:21.214 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[54264], 00:26:21.214 | 99.99th=[54264] 00:26:21.214 bw ( KiB/s): min=25344, max=33090, per=33.15%, avg=29958.60, stdev=2299.65, samples=10 00:26:21.214 iops : min= 198, max= 258, avg=234.00, stdev=17.89, samples=10 00:26:21.214 lat (msec) : 10=3.15%, 20=95.57%, 100=1.28% 00:26:21.214 cpu : usr=92.53%, sys=5.87%, ctx=17, majf=0, minf=9 00:26:21.214 IO depths : 1=4.6%, 2=95.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.214 issued rwts: total=1173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.214 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.214 00:26:21.214 Run status group 0 (all jobs): 00:26:21.214 READ: bw=88.3MiB/s (92.5MB/s), 24.8MiB/s-34.2MiB/s (26.0MB/s-35.8MB/s), io=442MiB (463MB), run=5006-5007msec 00:26:21.214 10:26:40 -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:21.215 10:26:40 -- target/dif.sh@43 -- # local sub 00:26:21.215 10:26:40 -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.215 10:26:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:21.215 10:26:40 -- target/dif.sh@36 -- # local sub_id=0 00:26:21.215 10:26:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # NULL_DIF=2 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # bs=4k 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # numjobs=8 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # iodepth=16 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # runtime= 00:26:21.215 10:26:40 -- target/dif.sh@109 -- # files=2 00:26:21.215 10:26:40 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:21.215 10:26:40 -- target/dif.sh@28 -- # local sub 00:26:21.215 10:26:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.215 10:26:40 -- target/dif.sh@31 -- # create_subsystem 0 00:26:21.215 10:26:40 -- target/dif.sh@18 -- # local sub_id=0 00:26:21.215 10:26:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 bdev_null0 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 
10:26:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 [2024-11-19 10:26:40.676912] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.215 10:26:40 -- target/dif.sh@31 -- # create_subsystem 1 00:26:21.215 10:26:40 -- target/dif.sh@18 -- # local sub_id=1 00:26:21.215 10:26:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 bdev_null1 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.215 10:26:40 -- target/dif.sh@31 -- # create_subsystem 2 00:26:21.215 10:26:40 -- target/dif.sh@18 -- # local sub_id=2 00:26:21.215 10:26:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 bdev_null2 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 
00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:21.215 10:26:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.215 10:26:40 -- common/autotest_common.sh@10 -- # set +x 00:26:21.215 10:26:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.215 10:26:40 -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:21.215 10:26:40 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:21.215 10:26:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.215 10:26:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:21.215 10:26:40 -- target/dif.sh@82 -- # gen_fio_conf 00:26:21.215 10:26:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.215 10:26:40 -- target/dif.sh@54 -- # local file 00:26:21.215 10:26:40 -- target/dif.sh@56 -- # cat 00:26:21.215 10:26:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:21.215 10:26:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.215 10:26:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:21.215 10:26:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.215 10:26:40 -- common/autotest_common.sh@1330 -- # shift 00:26:21.215 10:26:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:21.216 10:26:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.216 10:26:40 -- nvmf/common.sh@520 -- # config=() 00:26:21.216 10:26:40 -- nvmf/common.sh@520 -- # local subsystem config 00:26:21.216 10:26:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:21.216 10:26:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:21.216 { 00:26:21.216 "params": { 00:26:21.216 "name": "Nvme$subsystem", 00:26:21.216 "trtype": "$TEST_TRANSPORT", 00:26:21.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.216 "adrfam": "ipv4", 00:26:21.216 "trsvcid": "$NVMF_PORT", 00:26:21.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.216 "hdgst": ${hdgst:-false}, 00:26:21.216 "ddgst": ${ddgst:-false} 00:26:21.216 }, 00:26:21.216 "method": "bdev_nvme_attach_controller" 00:26:21.216 } 00:26:21.216 EOF 00:26:21.216 )") 00:26:21.216 10:26:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.216 10:26:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:21.216 10:26:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.216 10:26:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:21.216 10:26:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:21.216 10:26:40 -- target/dif.sh@73 -- # cat 00:26:21.216 10:26:40 -- nvmf/common.sh@542 -- # cat 00:26:21.475 10:26:40 -- target/dif.sh@72 -- # (( file++ )) 00:26:21.475 10:26:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.475 10:26:40 -- target/dif.sh@73 -- # cat 00:26:21.475 10:26:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:21.475 10:26:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:21.475 { 00:26:21.475 "params": { 00:26:21.475 "name": "Nvme$subsystem", 00:26:21.475 "trtype": "$TEST_TRANSPORT", 00:26:21.475 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:21.475 "adrfam": "ipv4", 00:26:21.475 "trsvcid": "$NVMF_PORT", 00:26:21.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.475 "hdgst": ${hdgst:-false}, 00:26:21.475 "ddgst": ${ddgst:-false} 00:26:21.475 }, 00:26:21.475 "method": "bdev_nvme_attach_controller" 00:26:21.475 } 00:26:21.475 EOF 00:26:21.475 )") 00:26:21.475 10:26:40 -- nvmf/common.sh@542 -- # cat 00:26:21.475 10:26:40 -- target/dif.sh@72 -- # (( file++ )) 00:26:21.475 10:26:40 -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.475 10:26:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:21.475 10:26:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:21.475 { 00:26:21.475 "params": { 00:26:21.475 "name": "Nvme$subsystem", 00:26:21.475 "trtype": "$TEST_TRANSPORT", 00:26:21.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.475 "adrfam": "ipv4", 00:26:21.475 "trsvcid": "$NVMF_PORT", 00:26:21.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.475 "hdgst": ${hdgst:-false}, 00:26:21.475 "ddgst": ${ddgst:-false} 00:26:21.475 }, 00:26:21.475 "method": "bdev_nvme_attach_controller" 00:26:21.475 } 00:26:21.475 EOF 00:26:21.475 )") 00:26:21.475 10:26:40 -- nvmf/common.sh@542 -- # cat 00:26:21.475 10:26:40 -- nvmf/common.sh@544 -- # jq . 00:26:21.475 10:26:40 -- nvmf/common.sh@545 -- # IFS=, 00:26:21.475 10:26:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:21.475 "params": { 00:26:21.475 "name": "Nvme0", 00:26:21.475 "trtype": "tcp", 00:26:21.475 "traddr": "10.0.0.2", 00:26:21.475 "adrfam": "ipv4", 00:26:21.475 "trsvcid": "4420", 00:26:21.475 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:21.475 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:21.475 "hdgst": false, 00:26:21.475 "ddgst": false 00:26:21.475 }, 00:26:21.475 "method": "bdev_nvme_attach_controller" 00:26:21.476 },{ 00:26:21.476 "params": { 00:26:21.476 "name": "Nvme1", 00:26:21.476 "trtype": "tcp", 00:26:21.476 "traddr": "10.0.0.2", 00:26:21.476 "adrfam": "ipv4", 00:26:21.476 "trsvcid": "4420", 00:26:21.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.476 "hdgst": false, 00:26:21.476 "ddgst": false 00:26:21.476 }, 00:26:21.476 "method": "bdev_nvme_attach_controller" 00:26:21.476 },{ 00:26:21.476 "params": { 00:26:21.476 "name": "Nvme2", 00:26:21.476 "trtype": "tcp", 00:26:21.476 "traddr": "10.0.0.2", 00:26:21.476 "adrfam": "ipv4", 00:26:21.476 "trsvcid": "4420", 00:26:21.476 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:21.476 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:21.476 "hdgst": false, 00:26:21.476 "ddgst": false 00:26:21.476 }, 00:26:21.476 "method": "bdev_nvme_attach_controller" 00:26:21.476 }' 00:26:21.476 10:26:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:21.476 10:26:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:21.476 10:26:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.476 10:26:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:21.476 10:26:40 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:21.476 10:26:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:21.476 10:26:40 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:21.476 10:26:40 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:21.476 10:26:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:21.476 10:26:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.476 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.476 ... 00:26:21.476 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.476 ... 00:26:21.476 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:21.476 ... 00:26:21.476 fio-3.35 00:26:21.476 Starting 24 threads 00:26:22.055 [2024-11-19 10:26:41.569344] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:26:22.055 [2024-11-19 10:26:41.569407] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:34.277 00:26:34.277 filename0: (groupid=0, jobs=1): err= 0: pid=101832: Tue Nov 19 10:26:51 2024 00:26:34.277 read: IOPS=208, BW=832KiB/s (852kB/s)(8360KiB/10046msec) 00:26:34.277 slat (usec): min=7, max=8020, avg=15.20, stdev=175.26 00:26:34.277 clat (msec): min=25, max=157, avg=76.78, stdev=23.02 00:26:34.277 lat (msec): min=25, max=158, avg=76.80, stdev=23.02 00:26:34.277 clat percentiles (msec): 00:26:34.277 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:26:34.277 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:26:34.277 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:26:34.277 | 99.00th=[ 140], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:26:34.277 | 99.99th=[ 159] 00:26:34.277 bw ( KiB/s): min= 592, max= 1024, per=4.04%, avg=829.60, stdev=114.65, samples=20 00:26:34.277 iops : min= 148, max= 256, avg=207.40, stdev=28.66, samples=20 00:26:34.277 lat (msec) : 50=13.54%, 100=73.68%, 250=12.78% 00:26:34.277 cpu : usr=33.66%, sys=0.56%, ctx=898, majf=0, minf=9 00:26:34.277 IO depths : 1=1.0%, 2=2.2%, 4=10.6%, 8=73.8%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:34.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.277 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.277 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.277 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.277 filename0: (groupid=0, jobs=1): err= 0: pid=101833: Tue Nov 19 10:26:51 2024 00:26:34.277 read: IOPS=228, BW=913KiB/s (935kB/s)(9176KiB/10045msec) 00:26:34.277 slat (usec): min=4, max=8019, avg=17.40, stdev=236.45 00:26:34.277 clat (msec): min=26, max=167, avg=69.85, stdev=22.33 00:26:34.277 lat (msec): min=26, max=167, avg=69.87, stdev=22.33 00:26:34.277 clat percentiles (msec): 00:26:34.277 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 48], 00:26:34.277 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:34.278 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 109], 00:26:34.278 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 167], 99.95th=[ 167], 00:26:34.278 | 99.99th=[ 167] 00:26:34.278 bw ( KiB/s): min= 640, max= 1248, per=4.44%, avg=911.20, stdev=152.16, samples=20 00:26:34.278 iops : min= 160, max= 312, avg=227.80, stdev=38.04, samples=20 00:26:34.278 lat (msec) : 50=28.33%, 100=61.29%, 250=10.37% 00:26:34.278 cpu : usr=32.42%, sys=0.73%, ctx=913, majf=0, minf=9 00:26:34.278 IO depths : 1=0.6%, 2=1.3%, 4=6.6%, 8=78.3%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=89.3%, 8=6.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101834: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=217, BW=871KiB/s (892kB/s)(8744KiB/10040msec) 00:26:34.278 slat (usec): min=6, max=4026, avg=16.59, stdev=148.64 00:26:34.278 clat (msec): min=21, max=176, avg=73.28, stdev=24.04 00:26:34.278 lat (msec): min=21, max=176, avg=73.29, stdev=24.05 00:26:34.278 clat percentiles (msec): 00:26:34.278 | 1.00th=[ 26], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 53], 00:26:34.278 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 74], 00:26:34.278 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 107], 95.00th=[ 122], 00:26:34.278 | 99.00th=[ 144], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 176], 00:26:34.278 | 99.99th=[ 176] 00:26:34.278 bw ( KiB/s): min= 636, max= 1156, per=4.22%, avg=867.60, stdev=156.05, samples=20 00:26:34.278 iops : min= 159, max= 289, avg=216.90, stdev=39.01, samples=20 00:26:34.278 lat (msec) : 50=17.84%, 100=68.53%, 250=13.63% 00:26:34.278 cpu : usr=42.64%, sys=0.91%, ctx=1188, majf=0, minf=9 00:26:34.278 IO depths : 1=2.4%, 2=5.4%, 4=15.8%, 8=65.9%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101835: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=234, BW=939KiB/s (962kB/s)(9432KiB/10044msec) 00:26:34.278 slat (usec): min=6, max=4021, avg=12.53, stdev=82.69 00:26:34.278 clat (msec): min=24, max=180, avg=67.92, stdev=23.28 00:26:34.278 lat (msec): min=24, max=180, avg=67.93, stdev=23.28 00:26:34.278 clat percentiles (msec): 00:26:34.278 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 48], 00:26:34.278 | 30.00th=[ 53], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 70], 00:26:34.278 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 115], 00:26:34.278 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 171], 99.95th=[ 182], 00:26:34.278 | 99.99th=[ 182] 00:26:34.278 bw ( KiB/s): min= 512, max= 1280, per=4.58%, avg=940.50, stdev=196.68, samples=20 00:26:34.278 iops : min= 128, max= 320, avg=235.10, stdev=49.15, samples=20 00:26:34.278 lat (msec) : 50=25.28%, 100=63.95%, 250=10.77% 00:26:34.278 cpu : usr=41.89%, sys=1.00%, ctx=1222, majf=0, minf=9 00:26:34.278 IO depths : 1=0.9%, 2=2.2%, 4=9.0%, 8=75.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101836: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=240, BW=961KiB/s (984kB/s)(9636KiB/10026msec) 00:26:34.278 slat (usec): min=3, max=8024, avg=14.74, stdev=163.41 00:26:34.278 clat (msec): min=5, max=166, avg=66.42, stdev=24.13 00:26:34.278 lat (msec): min=5, max=166, avg=66.43, stdev=24.13 00:26:34.278 clat percentiles (msec): 00:26:34.278 
| 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:26:34.278 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:26:34.278 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 97], 95.00th=[ 111], 00:26:34.278 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 167], 00:26:34.278 | 99.99th=[ 167] 00:26:34.278 bw ( KiB/s): min= 600, max= 1200, per=4.68%, avg=961.20, stdev=164.86, samples=20 00:26:34.278 iops : min= 150, max= 300, avg=240.30, stdev=41.21, samples=20 00:26:34.278 lat (msec) : 10=1.33%, 20=0.66%, 50=25.49%, 100=63.72%, 250=8.80% 00:26:34.278 cpu : usr=44.52%, sys=1.26%, ctx=1096, majf=0, minf=9 00:26:34.278 IO depths : 1=0.9%, 2=2.0%, 4=9.3%, 8=75.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=89.7%, 8=5.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101837: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=246, BW=986KiB/s (1010kB/s)(9896KiB/10034msec) 00:26:34.278 slat (usec): min=7, max=8021, avg=24.22, stdev=274.37 00:26:34.278 clat (msec): min=3, max=185, avg=64.71, stdev=24.63 00:26:34.278 lat (msec): min=3, max=185, avg=64.73, stdev=24.63 00:26:34.278 clat percentiles (msec): 00:26:34.278 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 41], 20.00th=[ 48], 00:26:34.278 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 70], 00:26:34.278 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 107], 00:26:34.278 | 99.00th=[ 130], 99.50th=[ 146], 99.90th=[ 186], 99.95th=[ 186], 00:26:34.278 | 99.99th=[ 186] 00:26:34.278 bw ( KiB/s): min= 640, max= 1720, per=4.78%, avg=982.95, stdev=239.68, samples=20 00:26:34.278 iops : min= 160, max= 430, avg=245.70, stdev=59.95, samples=20 00:26:34.278 lat (msec) : 4=0.65%, 10=3.15%, 20=1.01%, 50=21.46%, 100=65.48% 00:26:34.278 lat (msec) : 250=8.25% 00:26:34.278 cpu : usr=42.88%, sys=0.93%, ctx=1325, majf=0, minf=9 00:26:34.278 IO depths : 1=0.9%, 2=2.0%, 4=7.6%, 8=76.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101838: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=202, BW=811KiB/s (830kB/s)(8116KiB/10011msec) 00:26:34.278 slat (usec): min=4, max=6021, avg=23.65, stdev=229.09 00:26:34.278 clat (msec): min=34, max=161, avg=78.76, stdev=24.51 00:26:34.278 lat (msec): min=34, max=161, avg=78.78, stdev=24.51 00:26:34.278 clat percentiles (msec): 00:26:34.278 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:26:34.278 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:26:34.278 | 70.00th=[ 89], 80.00th=[ 99], 90.00th=[ 111], 95.00th=[ 126], 00:26:34.278 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 163], 99.95th=[ 163], 00:26:34.278 | 99.99th=[ 163] 00:26:34.278 bw ( KiB/s): min= 512, max= 1120, per=3.92%, avg=804.68, stdev=195.21, samples=19 00:26:34.278 iops : min= 128, max= 280, avg=201.16, stdev=48.80, samples=19 00:26:34.278 lat (msec) : 50=12.91%, 100=67.92%, 250=19.17% 00:26:34.278 cpu : usr=42.85%, sys=1.04%, ctx=1290, majf=0, minf=9 
00:26:34.278 IO depths : 1=2.1%, 2=4.6%, 4=13.1%, 8=69.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:34.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 complete : 0=0.0%, 4=90.9%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.278 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.278 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.278 filename0: (groupid=0, jobs=1): err= 0: pid=101839: Tue Nov 19 10:26:51 2024 00:26:34.278 read: IOPS=190, BW=761KiB/s (779kB/s)(7616KiB/10008msec) 00:26:34.278 slat (usec): min=5, max=8021, avg=17.99, stdev=205.25 00:26:34.278 clat (msec): min=7, max=183, avg=83.97, stdev=24.53 00:26:34.278 lat (msec): min=7, max=183, avg=83.98, stdev=24.53 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 67], 00:26:34.279 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 85], 00:26:34.279 | 70.00th=[ 93], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 136], 00:26:34.279 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:26:34.279 | 99.99th=[ 184] 00:26:34.279 bw ( KiB/s): min= 512, max= 896, per=3.65%, avg=749.47, stdev=142.14, samples=19 00:26:34.279 iops : min= 128, max= 224, avg=187.37, stdev=35.53, samples=19 00:26:34.279 lat (msec) : 10=0.53%, 50=5.67%, 100=73.16%, 250=20.64% 00:26:34.279 cpu : usr=36.71%, sys=0.68%, ctx=1022, majf=0, minf=9 00:26:34.279 IO depths : 1=2.0%, 2=5.2%, 4=16.0%, 8=65.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:34.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.279 filename1: (groupid=0, jobs=1): err= 0: pid=101840: Tue Nov 19 10:26:51 2024 00:26:34.279 read: IOPS=187, BW=752KiB/s (770kB/s)(7520KiB/10001msec) 00:26:34.279 slat (nsec): min=4638, max=35155, avg=11120.78, stdev=3876.81 00:26:34.279 clat (msec): min=4, max=168, avg=85.03, stdev=26.07 00:26:34.279 lat (msec): min=4, max=168, avg=85.04, stdev=26.07 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 67], 00:26:34.279 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 87], 00:26:34.279 | 70.00th=[ 96], 80.00th=[ 104], 90.00th=[ 115], 95.00th=[ 140], 00:26:34.279 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 169], 99.95th=[ 169], 00:26:34.279 | 99.99th=[ 169] 00:26:34.279 bw ( KiB/s): min= 464, max= 1072, per=3.59%, avg=736.21, stdev=147.50, samples=19 00:26:34.279 iops : min= 116, max= 268, avg=183.95, stdev=36.89, samples=19 00:26:34.279 lat (msec) : 10=0.85%, 50=5.00%, 100=69.52%, 250=24.63% 00:26:34.279 cpu : usr=36.07%, sys=0.93%, ctx=1050, majf=0, minf=9 00:26:34.279 IO depths : 1=2.6%, 2=5.4%, 4=15.5%, 8=65.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:34.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.279 filename1: (groupid=0, jobs=1): err= 0: pid=101841: Tue Nov 19 10:26:51 2024 00:26:34.279 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10004msec) 00:26:34.279 slat (nsec): min=3729, max=37344, avg=11532.33, stdev=4113.51 00:26:34.279 clat (msec): 
min=5, max=198, avg=84.69, stdev=24.91 00:26:34.279 lat (msec): min=5, max=198, avg=84.70, stdev=24.91 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 24], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 69], 00:26:34.279 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 86], 00:26:34.279 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 120], 95.00th=[ 132], 00:26:34.279 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 199], 99.95th=[ 199], 00:26:34.279 | 99.99th=[ 199] 00:26:34.279 bw ( KiB/s): min= 512, max= 896, per=3.60%, avg=740.21, stdev=120.82, samples=19 00:26:34.279 iops : min= 128, max= 224, avg=184.95, stdev=30.17, samples=19 00:26:34.279 lat (msec) : 10=0.85%, 50=4.71%, 100=71.35%, 250=23.09% 00:26:34.279 cpu : usr=34.13%, sys=0.79%, ctx=970, majf=0, minf=9 00:26:34.279 IO depths : 1=3.3%, 2=6.9%, 4=16.4%, 8=63.9%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:34.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.279 filename1: (groupid=0, jobs=1): err= 0: pid=101842: Tue Nov 19 10:26:51 2024 00:26:34.279 read: IOPS=250, BW=1002KiB/s (1026kB/s)(9.80MiB/10008msec) 00:26:34.279 slat (usec): min=4, max=4019, avg=12.55, stdev=80.85 00:26:34.279 clat (msec): min=5, max=134, avg=63.78, stdev=22.14 00:26:34.279 lat (msec): min=5, max=134, avg=63.79, stdev=22.14 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 6], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:26:34.279 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 70], 00:26:34.279 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 105], 00:26:34.279 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 134], 99.95th=[ 134], 00:26:34.279 | 99.99th=[ 134] 00:26:34.279 bw ( KiB/s): min= 688, max= 1280, per=4.85%, avg=996.80, stdev=173.50, samples=20 00:26:34.279 iops : min= 172, max= 320, avg=249.20, stdev=43.38, samples=20 00:26:34.279 lat (msec) : 10=1.91%, 20=1.28%, 50=26.16%, 100=64.51%, 250=6.14% 00:26:34.279 cpu : usr=38.92%, sys=0.85%, ctx=1093, majf=0, minf=9 00:26:34.279 IO depths : 1=0.8%, 2=1.8%, 4=7.7%, 8=76.5%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:34.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 complete : 0=0.0%, 4=89.7%, 8=6.3%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 issued rwts: total=2508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.279 filename1: (groupid=0, jobs=1): err= 0: pid=101843: Tue Nov 19 10:26:51 2024 00:26:34.279 read: IOPS=194, BW=780KiB/s (798kB/s)(7796KiB/10001msec) 00:26:34.279 slat (nsec): min=7922, max=42015, avg=11563.56, stdev=4592.07 00:26:34.279 clat (msec): min=9, max=170, avg=82.02, stdev=24.23 00:26:34.279 lat (msec): min=9, max=170, avg=82.03, stdev=24.23 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 53], 20.00th=[ 64], 00:26:34.279 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 86], 00:26:34.279 | 70.00th=[ 94], 80.00th=[ 103], 90.00th=[ 111], 95.00th=[ 125], 00:26:34.279 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:26:34.279 | 99.99th=[ 171] 00:26:34.279 bw ( KiB/s): min= 512, max= 1024, per=3.70%, avg=759.21, stdev=142.23, samples=19 00:26:34.279 iops : min= 128, max= 256, avg=189.68, stdev=35.60, samples=19 00:26:34.279 
lat (msec) : 10=0.31%, 20=0.51%, 50=6.57%, 100=71.06%, 250=21.55% 00:26:34.279 cpu : usr=42.66%, sys=1.03%, ctx=1301, majf=0, minf=9 00:26:34.279 IO depths : 1=2.4%, 2=5.2%, 4=14.3%, 8=67.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:34.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.279 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.279 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.279 filename1: (groupid=0, jobs=1): err= 0: pid=101844: Tue Nov 19 10:26:51 2024 00:26:34.279 read: IOPS=188, BW=755KiB/s (774kB/s)(7564KiB/10013msec) 00:26:34.279 slat (nsec): min=4649, max=35797, avg=11221.52, stdev=3856.78 00:26:34.279 clat (msec): min=35, max=191, avg=84.64, stdev=25.16 00:26:34.279 lat (msec): min=35, max=191, avg=84.65, stdev=25.16 00:26:34.279 clat percentiles (msec): 00:26:34.279 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 69], 00:26:34.279 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:26:34.279 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 120], 95.00th=[ 132], 00:26:34.279 | 99.00th=[ 157], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 192], 00:26:34.279 | 99.99th=[ 192] 00:26:34.279 bw ( KiB/s): min= 512, max= 928, per=3.65%, avg=750.00, stdev=140.97, samples=20 00:26:34.279 iops : min= 128, max= 232, avg=187.50, stdev=35.24, samples=20 00:26:34.279 lat (msec) : 50=8.51%, 100=67.11%, 250=24.38% 00:26:34.279 cpu : usr=33.33%, sys=0.88%, ctx=914, majf=0, minf=9 00:26:34.279 IO depths : 1=3.1%, 2=6.6%, 4=15.8%, 8=64.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename1: (groupid=0, jobs=1): err= 0: pid=101845: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=198, BW=794KiB/s (813kB/s)(7944KiB/10008msec) 00:26:34.280 slat (usec): min=7, max=10017, avg=16.71, stdev=224.58 00:26:34.280 clat (msec): min=28, max=159, avg=80.51, stdev=23.93 00:26:34.280 lat (msec): min=28, max=159, avg=80.53, stdev=23.93 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 61], 00:26:34.280 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:26:34.280 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 117], 95.00th=[ 129], 00:26:34.280 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:26:34.280 | 99.99th=[ 161] 00:26:34.280 bw ( KiB/s): min= 512, max= 976, per=3.81%, avg=784.00, stdev=136.08, samples=19 00:26:34.280 iops : min= 128, max= 244, avg=196.00, stdev=34.02, samples=19 00:26:34.280 lat (msec) : 50=9.06%, 100=72.76%, 250=18.18% 00:26:34.280 cpu : usr=38.33%, sys=0.90%, ctx=1159, majf=0, minf=9 00:26:34.280 IO depths : 1=1.4%, 2=3.0%, 4=9.3%, 8=73.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=90.5%, 8=5.8%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename1: (groupid=0, jobs=1): err= 0: pid=101846: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=217, BW=871KiB/s 
(892kB/s)(8744KiB/10036msec) 00:26:34.280 slat (usec): min=6, max=8020, avg=17.48, stdev=242.25 00:26:34.280 clat (msec): min=33, max=155, avg=73.34, stdev=22.49 00:26:34.280 lat (msec): min=33, max=155, avg=73.36, stdev=22.49 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:26:34.280 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:26:34.280 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 111], 00:26:34.280 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:26:34.280 | 99.99th=[ 157] 00:26:34.280 bw ( KiB/s): min= 640, max= 1104, per=4.22%, avg=866.15, stdev=140.15, samples=20 00:26:34.280 iops : min= 160, max= 276, avg=216.50, stdev=35.08, samples=20 00:26:34.280 lat (msec) : 50=19.26%, 100=70.13%, 250=10.61% 00:26:34.280 cpu : usr=32.43%, sys=0.76%, ctx=892, majf=0, minf=9 00:26:34.280 IO depths : 1=1.5%, 2=3.2%, 4=11.1%, 8=71.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=90.7%, 8=4.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename1: (groupid=0, jobs=1): err= 0: pid=101847: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=217, BW=871KiB/s (892kB/s)(8748KiB/10045msec) 00:26:34.280 slat (usec): min=3, max=8020, avg=16.35, stdev=191.50 00:26:34.280 clat (msec): min=15, max=177, avg=73.38, stdev=25.78 00:26:34.280 lat (msec): min=15, max=177, avg=73.40, stdev=25.78 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 17], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:26:34.280 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:26:34.280 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 122], 00:26:34.280 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:26:34.280 | 99.99th=[ 178] 00:26:34.280 bw ( KiB/s): min= 544, max= 1168, per=4.23%, avg=868.40, stdev=177.69, samples=20 00:26:34.280 iops : min= 136, max= 292, avg=217.10, stdev=44.42, samples=20 00:26:34.280 lat (msec) : 20=1.46%, 50=18.34%, 100=66.85%, 250=13.35% 00:26:34.280 cpu : usr=34.46%, sys=0.90%, ctx=988, majf=0, minf=9 00:26:34.280 IO depths : 1=1.0%, 2=2.3%, 4=9.4%, 8=74.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=89.9%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=2187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename2: (groupid=0, jobs=1): err= 0: pid=101848: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=201, BW=806KiB/s (825kB/s)(8084KiB/10029msec) 00:26:34.280 slat (nsec): min=4698, max=60108, avg=11218.86, stdev=4316.14 00:26:34.280 clat (msec): min=33, max=166, avg=79.33, stdev=25.86 00:26:34.280 lat (msec): min=33, max=166, avg=79.34, stdev=25.86 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 56], 00:26:34.280 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 85], 00:26:34.280 | 70.00th=[ 93], 80.00th=[ 101], 90.00th=[ 116], 95.00th=[ 129], 00:26:34.280 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 167], 99.95th=[ 167], 00:26:34.280 | 99.99th=[ 167] 00:26:34.280 bw ( KiB/s): min= 592, max= 1120, per=3.90%, avg=800.60, 
stdev=159.84, samples=20 00:26:34.280 iops : min= 148, max= 280, avg=200.10, stdev=39.99, samples=20 00:26:34.280 lat (msec) : 50=14.79%, 100=65.17%, 250=20.04% 00:26:34.280 cpu : usr=37.33%, sys=0.79%, ctx=1051, majf=0, minf=9 00:26:34.280 IO depths : 1=0.7%, 2=1.6%, 4=7.9%, 8=76.0%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=89.6%, 8=6.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename2: (groupid=0, jobs=1): err= 0: pid=101849: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=194, BW=778KiB/s (797kB/s)(7788KiB/10004msec) 00:26:34.280 slat (usec): min=4, max=8048, avg=27.94, stdev=363.18 00:26:34.280 clat (msec): min=6, max=179, avg=82.03, stdev=24.75 00:26:34.280 lat (msec): min=6, max=179, avg=82.05, stdev=24.76 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 62], 00:26:34.280 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 86], 00:26:34.280 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 124], 00:26:34.280 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 180], 99.95th=[ 180], 00:26:34.280 | 99.99th=[ 180] 00:26:34.280 bw ( KiB/s): min= 560, max= 976, per=3.68%, avg=755.68, stdev=118.14, samples=19 00:26:34.280 iops : min= 140, max= 244, avg=188.84, stdev=29.47, samples=19 00:26:34.280 lat (msec) : 10=0.82%, 50=8.58%, 100=71.75%, 250=18.85% 00:26:34.280 cpu : usr=33.72%, sys=0.87%, ctx=982, majf=0, minf=9 00:26:34.280 IO depths : 1=1.7%, 2=3.6%, 4=11.1%, 8=71.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=90.5%, 8=5.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.280 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.280 filename2: (groupid=0, jobs=1): err= 0: pid=101850: Tue Nov 19 10:26:51 2024 00:26:34.280 read: IOPS=242, BW=972KiB/s (995kB/s)(9748KiB/10032msec) 00:26:34.280 slat (usec): min=7, max=4027, avg=12.75, stdev=81.46 00:26:34.280 clat (msec): min=21, max=159, avg=65.68, stdev=21.65 00:26:34.280 lat (msec): min=21, max=159, avg=65.70, stdev=21.66 00:26:34.280 clat percentiles (msec): 00:26:34.280 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 44], 20.00th=[ 48], 00:26:34.280 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 68], 00:26:34.280 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 95], 95.00th=[ 108], 00:26:34.280 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 159], 99.95th=[ 159], 00:26:34.280 | 99.99th=[ 159] 00:26:34.280 bw ( KiB/s): min= 640, max= 1280, per=4.72%, avg=970.20, stdev=182.43, samples=20 00:26:34.280 iops : min= 160, max= 320, avg=242.50, stdev=45.58, samples=20 00:26:34.280 lat (msec) : 50=27.82%, 100=65.49%, 250=6.69% 00:26:34.280 cpu : usr=42.69%, sys=1.10%, ctx=1276, majf=0, minf=9 00:26:34.280 IO depths : 1=0.2%, 2=0.7%, 4=7.1%, 8=78.3%, 16=13.7%, 32=0.0%, >=64=0.0% 00:26:34.280 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 complete : 0=0.0%, 4=89.5%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.280 issued rwts: total=2437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.281 filename2: (groupid=0, jobs=1): err= 0: 
pid=101851: Tue Nov 19 10:26:51 2024 00:26:34.281 read: IOPS=211, BW=845KiB/s (865kB/s)(8480KiB/10040msec) 00:26:34.281 slat (usec): min=7, max=8018, avg=20.03, stdev=260.81 00:26:34.281 clat (msec): min=36, max=189, avg=75.69, stdev=23.62 00:26:34.281 lat (msec): min=36, max=189, avg=75.71, stdev=23.62 00:26:34.281 clat percentiles (msec): 00:26:34.281 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:26:34.281 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:26:34.281 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:26:34.281 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 190], 00:26:34.281 | 99.99th=[ 190] 00:26:34.281 bw ( KiB/s): min= 556, max= 1120, per=4.09%, avg=839.25, stdev=144.91, samples=20 00:26:34.281 iops : min= 139, max= 280, avg=209.75, stdev=36.22, samples=20 00:26:34.281 lat (msec) : 50=17.97%, 100=69.72%, 250=12.31% 00:26:34.281 cpu : usr=32.47%, sys=0.68%, ctx=907, majf=0, minf=9 00:26:34.281 IO depths : 1=0.7%, 2=1.6%, 4=8.7%, 8=76.0%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.281 filename2: (groupid=0, jobs=1): err= 0: pid=101852: Tue Nov 19 10:26:51 2024 00:26:34.281 read: IOPS=212, BW=852KiB/s (872kB/s)(8552KiB/10043msec) 00:26:34.281 slat (usec): min=7, max=8027, avg=14.97, stdev=173.42 00:26:34.281 clat (msec): min=22, max=178, avg=75.00, stdev=22.43 00:26:34.281 lat (msec): min=22, max=178, avg=75.02, stdev=22.43 00:26:34.281 clat percentiles (msec): 00:26:34.281 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:26:34.281 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:26:34.281 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 117], 00:26:34.281 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 178], 99.95th=[ 180], 00:26:34.281 | 99.99th=[ 180] 00:26:34.281 bw ( KiB/s): min= 640, max= 1024, per=4.13%, avg=848.80, stdev=108.30, samples=20 00:26:34.281 iops : min= 160, max= 256, avg=212.20, stdev=27.07, samples=20 00:26:34.281 lat (msec) : 50=13.66%, 100=73.85%, 250=12.49% 00:26:34.281 cpu : usr=39.64%, sys=1.01%, ctx=1157, majf=0, minf=9 00:26:34.281 IO depths : 1=2.7%, 2=5.7%, 4=15.0%, 8=66.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 issued rwts: total=2138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.281 filename2: (groupid=0, jobs=1): err= 0: pid=101853: Tue Nov 19 10:26:51 2024 00:26:34.281 read: IOPS=193, BW=775KiB/s (794kB/s)(7760KiB/10012msec) 00:26:34.281 slat (usec): min=3, max=4024, avg=13.30, stdev=91.22 00:26:34.281 clat (msec): min=25, max=154, avg=82.41, stdev=24.98 00:26:34.281 lat (msec): min=25, max=154, avg=82.43, stdev=24.98 00:26:34.281 clat percentiles (msec): 00:26:34.281 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 63], 00:26:34.281 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:26:34.281 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 130], 00:26:34.281 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:26:34.281 | 99.99th=[ 155] 00:26:34.281 bw 
( KiB/s): min= 512, max= 1120, per=3.77%, avg=775.20, stdev=162.73, samples=20 00:26:34.281 iops : min= 128, max= 280, avg=193.80, stdev=40.68, samples=20 00:26:34.281 lat (msec) : 50=9.90%, 100=64.69%, 250=25.41% 00:26:34.281 cpu : usr=34.45%, sys=0.75%, ctx=987, majf=0, minf=9 00:26:34.281 IO depths : 1=2.6%, 2=5.7%, 4=14.8%, 8=66.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:26:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.281 filename2: (groupid=0, jobs=1): err= 0: pid=101854: Tue Nov 19 10:26:51 2024 00:26:34.281 read: IOPS=215, BW=861KiB/s (882kB/s)(8636KiB/10027msec) 00:26:34.281 slat (usec): min=7, max=8030, avg=16.94, stdev=193.04 00:26:34.281 clat (msec): min=32, max=143, avg=74.19, stdev=20.96 00:26:34.281 lat (msec): min=32, max=143, avg=74.21, stdev=20.96 00:26:34.281 clat percentiles (msec): 00:26:34.281 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 54], 00:26:34.281 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:26:34.281 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 103], 95.00th=[ 113], 00:26:34.281 | 99.00th=[ 131], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:26:34.281 | 99.99th=[ 144] 00:26:34.281 bw ( KiB/s): min= 568, max= 1328, per=4.16%, avg=855.95, stdev=165.84, samples=20 00:26:34.281 iops : min= 142, max= 332, avg=213.95, stdev=41.44, samples=20 00:26:34.281 lat (msec) : 50=15.24%, 100=74.48%, 250=10.28% 00:26:34.281 cpu : usr=41.71%, sys=0.88%, ctx=1273, majf=0, minf=9 00:26:34.281 IO depths : 1=2.5%, 2=5.6%, 4=14.8%, 8=66.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.281 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.281 filename2: (groupid=0, jobs=1): err= 0: pid=101855: Tue Nov 19 10:26:51 2024 00:26:34.281 read: IOPS=257, BW=1028KiB/s (1053kB/s)(10.1MiB/10035msec) 00:26:34.281 slat (nsec): min=7557, max=59900, avg=10745.45, stdev=3886.52 00:26:34.281 clat (usec): min=1625, max=142949, avg=62092.14, stdev=22756.62 00:26:34.281 lat (usec): min=1633, max=142958, avg=62102.89, stdev=22756.59 00:26:34.281 clat percentiles (msec): 00:26:34.281 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 41], 20.00th=[ 48], 00:26:34.281 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 69], 00:26:34.281 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 91], 95.00th=[ 103], 00:26:34.281 | 99.00th=[ 116], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:26:34.281 | 99.99th=[ 144] 00:26:34.281 bw ( KiB/s): min= 720, max= 1904, per=5.00%, avg=1026.95, stdev=250.65, samples=20 00:26:34.281 iops : min= 180, max= 476, avg=256.70, stdev=62.70, samples=20 00:26:34.281 lat (msec) : 2=0.62%, 4=0.62%, 10=3.10%, 20=1.24%, 50=25.31% 00:26:34.281 lat (msec) : 100=63.76%, 250=5.35% 00:26:34.281 cpu : usr=43.54%, sys=1.04%, ctx=1558, majf=0, minf=9 00:26:34.281 IO depths : 1=1.3%, 2=2.6%, 4=9.9%, 8=74.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.282 complete : 0=0.0%, 4=89.9%, 8=5.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.282 issued rwts: total=2580,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:26:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.282 00:26:34.282 Run status group 0 (all jobs): 00:26:34.282 READ: bw=20.0MiB/s (21.0MB/s), 752KiB/s-1028KiB/s (770kB/s-1053kB/s), io=201MiB (211MB), run=10001-10046msec 00:26:34.282 10:26:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:34.282 10:26:51 -- target/dif.sh@43 -- # local sub 00:26:34.282 10:26:51 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.282 10:26:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.282 10:26:51 -- target/dif.sh@36 -- # local sub_id=0 00:26:34.282 10:26:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.282 10:26:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:34.282 10:26:51 -- target/dif.sh@36 -- # local sub_id=1 00:26:34.282 10:26:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.282 10:26:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:34.282 10:26:51 -- target/dif.sh@36 -- # local sub_id=2 00:26:34.282 10:26:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:34.282 10:26:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # numjobs=2 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # iodepth=8 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # runtime=5 00:26:34.282 10:26:51 -- target/dif.sh@115 -- # files=1 00:26:34.282 10:26:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:34.282 10:26:51 -- target/dif.sh@28 -- # local sub 00:26:34.282 10:26:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.282 10:26:52 -- target/dif.sh@31 -- # create_subsystem 0 00:26:34.282 10:26:52 -- target/dif.sh@18 -- # local sub_id=0 00:26:34.282 10:26:52 -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 bdev_null0 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 [2024-11-19 10:26:52.030706] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.282 10:26:52 -- target/dif.sh@31 -- # create_subsystem 1 00:26:34.282 10:26:52 -- target/dif.sh@18 -- # local sub_id=1 00:26:34.282 10:26:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 bdev_null1 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.282 10:26:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.282 10:26:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.282 10:26:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.282 10:26:52 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:34.282 10:26:52 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:34.282 10:26:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:34.282 10:26:52 -- nvmf/common.sh@520 -- # config=() 00:26:34.282 10:26:52 -- nvmf/common.sh@520 -- # local subsystem config 00:26:34.282 10:26:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.282 10:26:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 
00:26:34.282 { 00:26:34.282 "params": { 00:26:34.282 "name": "Nvme$subsystem", 00:26:34.282 "trtype": "$TEST_TRANSPORT", 00:26:34.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.282 "adrfam": "ipv4", 00:26:34.282 "trsvcid": "$NVMF_PORT", 00:26:34.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.282 "hdgst": ${hdgst:-false}, 00:26:34.282 "ddgst": ${ddgst:-false} 00:26:34.282 }, 00:26:34.282 "method": "bdev_nvme_attach_controller" 00:26:34.282 } 00:26:34.282 EOF 00:26:34.282 )") 00:26:34.282 10:26:52 -- target/dif.sh@82 -- # gen_fio_conf 00:26:34.282 10:26:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.282 10:26:52 -- target/dif.sh@54 -- # local file 00:26:34.282 10:26:52 -- target/dif.sh@56 -- # cat 00:26:34.282 10:26:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.282 10:26:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:34.282 10:26:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.282 10:26:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:34.282 10:26:52 -- nvmf/common.sh@542 -- # cat 00:26:34.282 10:26:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.282 10:26:52 -- common/autotest_common.sh@1330 -- # shift 00:26:34.282 10:26:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:34.283 10:26:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.283 10:26:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:34.283 10:26:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.283 10:26:52 -- target/dif.sh@73 -- # cat 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.283 10:26:52 -- target/dif.sh@72 -- # (( file++ )) 00:26:34.283 10:26:52 -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.283 10:26:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:34.283 10:26:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:34.283 { 00:26:34.283 "params": { 00:26:34.283 "name": "Nvme$subsystem", 00:26:34.283 "trtype": "$TEST_TRANSPORT", 00:26:34.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.283 "adrfam": "ipv4", 00:26:34.283 "trsvcid": "$NVMF_PORT", 00:26:34.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.283 "hdgst": ${hdgst:-false}, 00:26:34.283 "ddgst": ${ddgst:-false} 00:26:34.283 }, 00:26:34.283 "method": "bdev_nvme_attach_controller" 00:26:34.283 } 00:26:34.283 EOF 00:26:34.283 )") 00:26:34.283 10:26:52 -- nvmf/common.sh@542 -- # cat 00:26:34.283 10:26:52 -- nvmf/common.sh@544 -- # jq . 
00:26:34.283 10:26:52 -- nvmf/common.sh@545 -- # IFS=, 00:26:34.283 10:26:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:34.283 "params": { 00:26:34.283 "name": "Nvme0", 00:26:34.283 "trtype": "tcp", 00:26:34.283 "traddr": "10.0.0.2", 00:26:34.283 "adrfam": "ipv4", 00:26:34.283 "trsvcid": "4420", 00:26:34.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.283 "hdgst": false, 00:26:34.283 "ddgst": false 00:26:34.283 }, 00:26:34.283 "method": "bdev_nvme_attach_controller" 00:26:34.283 },{ 00:26:34.283 "params": { 00:26:34.283 "name": "Nvme1", 00:26:34.283 "trtype": "tcp", 00:26:34.283 "traddr": "10.0.0.2", 00:26:34.283 "adrfam": "ipv4", 00:26:34.283 "trsvcid": "4420", 00:26:34.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.283 "hdgst": false, 00:26:34.283 "ddgst": false 00:26:34.283 }, 00:26:34.283 "method": "bdev_nvme_attach_controller" 00:26:34.283 }' 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.283 10:26:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.283 10:26:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:34.283 10:26:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:34.283 10:26:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:34.283 10:26:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:34.283 10:26:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.283 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:34.283 ... 00:26:34.283 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:34.283 ... 00:26:34.283 fio-3.35 00:26:34.283 Starting 4 threads 00:26:34.283 [2024-11-19 10:26:52.754190] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
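The xtrace above is essentially a short SPDK setup sequence followed by an fio launch: null bdevs with 16-byte metadata and DIF type 1 are created and exported over NVMe/TCP, gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem, and fio consumes the merged JSON through --spdk_json_conf while the spdk_bdev plugin is pulled in via LD_PRELOAD. A rough standalone equivalent is sketched below, assuming a running nvmf_tgt with a TCP transport already configured, the stock scripts/rpc.py client, and an illustrative job-file and config-file name (those names are not taken from this run):

# export a DIF-capable null bdev over NVMe/TCP (values mirror the trace above)
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# run fio against the exported namespace; bdev.json stands in for the
# bdev_nvme_attach_controller config that gen_nvmf_target_json prints above
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json randread.fio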
00:26:34.283 [2024-11-19 10:26:52.754259] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:38.470 00:26:38.470 filename0: (groupid=0, jobs=1): err= 0: pid=101987: Tue Nov 19 10:26:57 2024 00:26:38.470 read: IOPS=1923, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5003msec) 00:26:38.470 slat (nsec): min=4872, max=43917, avg=12461.76, stdev=3822.00 00:26:38.470 clat (usec): min=1173, max=10591, avg=4100.78, stdev=558.82 00:26:38.470 lat (usec): min=1183, max=10603, avg=4113.24, stdev=558.61 00:26:38.470 clat percentiles (usec): 00:26:38.470 | 1.00th=[ 2212], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:26:38.470 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:26:38.470 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4948], 00:26:38.470 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 9110], 99.95th=[ 9634], 00:26:38.470 | 99.99th=[10552] 00:26:38.470 bw ( KiB/s): min=13568, max=15968, per=25.04%, avg=15356.44, stdev=778.01, samples=9 00:26:38.470 iops : min= 1696, max= 1996, avg=1919.56, stdev=97.25, samples=9 00:26:38.470 lat (msec) : 2=0.37%, 4=43.97%, 10=55.64%, 20=0.01% 00:26:38.470 cpu : usr=93.72%, sys=5.06%, ctx=7, majf=0, minf=9 00:26:38.470 IO depths : 1=10.3%, 2=24.8%, 4=50.2%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 issued rwts: total=9622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.470 filename0: (groupid=0, jobs=1): err= 0: pid=101988: Tue Nov 19 10:26:57 2024 00:26:38.470 read: IOPS=1913, BW=14.9MiB/s (15.7MB/s)(74.8MiB/5001msec) 00:26:38.470 slat (nsec): min=7645, max=54748, avg=13520.26, stdev=3992.10 00:26:38.470 clat (usec): min=2605, max=11135, avg=4123.67, stdev=470.80 00:26:38.470 lat (usec): min=2616, max=11153, avg=4137.19, stdev=470.71 00:26:38.470 clat percentiles (usec): 00:26:38.470 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3982], 00:26:38.470 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4015], 00:26:38.470 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4948], 00:26:38.470 | 99.00th=[ 6194], 99.50th=[ 6587], 99.90th=[ 8455], 99.95th=[11076], 00:26:38.470 | 99.99th=[11076] 00:26:38.470 bw ( KiB/s): min=13184, max=15744, per=24.88%, avg=15260.44, stdev=843.68, samples=9 00:26:38.470 iops : min= 1648, max= 1968, avg=1907.56, stdev=105.46, samples=9 00:26:38.470 lat (msec) : 4=41.74%, 10=58.18%, 20=0.07% 00:26:38.470 cpu : usr=94.16%, sys=4.62%, ctx=7, majf=0, minf=0 00:26:38.470 IO depths : 1=6.4%, 2=25.0%, 4=50.0%, 8=18.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 issued rwts: total=9568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.470 filename1: (groupid=0, jobs=1): err= 0: pid=101989: Tue Nov 19 10:26:57 2024 00:26:38.470 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5002msec) 00:26:38.470 slat (nsec): min=7550, max=73204, avg=13396.96, stdev=4774.76 00:26:38.470 clat (usec): min=2682, max=11676, avg=4128.81, stdev=503.23 00:26:38.470 lat (usec): min=2690, max=11696, avg=4142.21, stdev=502.93 00:26:38.470 clat percentiles (usec): 00:26:38.470 | 
1.00th=[ 3392], 5.00th=[ 3425], 10.00th=[ 3949], 20.00th=[ 3982], 00:26:38.470 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:26:38.470 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4621], 95.00th=[ 4948], 00:26:38.470 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7570], 99.95th=[11076], 00:26:38.470 | 99.99th=[11731] 00:26:38.470 bw ( KiB/s): min=13312, max=15744, per=24.93%, avg=15285.44, stdev=806.16, samples=9 00:26:38.470 iops : min= 1664, max= 1968, avg=1910.67, stdev=100.77, samples=9 00:26:38.470 lat (msec) : 4=36.45%, 10=63.47%, 20=0.08% 00:26:38.470 cpu : usr=94.26%, sys=4.62%, ctx=6, majf=0, minf=9 00:26:38.470 IO depths : 1=1.9%, 2=4.4%, 4=70.6%, 8=23.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 complete : 0=0.0%, 4=89.8%, 8=10.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.470 issued rwts: total=9584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.470 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.470 filename1: (groupid=0, jobs=1): err= 0: pid=101990: Tue Nov 19 10:26:57 2024 00:26:38.470 read: IOPS=1914, BW=15.0MiB/s (15.7MB/s)(74.8MiB/5002msec) 00:26:38.470 slat (nsec): min=7590, max=49156, avg=14216.14, stdev=4713.23 00:26:38.470 clat (usec): min=1994, max=11131, avg=4107.20, stdev=514.64 00:26:38.470 lat (usec): min=2006, max=11147, avg=4121.42, stdev=514.71 00:26:38.470 clat percentiles (usec): 00:26:38.470 | 1.00th=[ 3425], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:26:38.470 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:26:38.470 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4228], 95.00th=[ 4948], 00:26:38.470 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 9241], 99.95th=[11076], 00:26:38.470 | 99.99th=[11076] 00:26:38.470 bw ( KiB/s): min=13184, max=15744, per=24.91%, avg=15274.67, stdev=846.68, samples=9 00:26:38.471 iops : min= 1648, max= 1968, avg=1909.33, stdev=105.83, samples=9 00:26:38.471 lat (msec) : 2=0.01%, 4=53.60%, 10=46.30%, 20=0.08% 00:26:38.471 cpu : usr=94.38%, sys=4.28%, ctx=8, majf=0, minf=0 00:26:38.471 IO depths : 1=8.9%, 2=24.9%, 4=50.1%, 8=16.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.471 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.471 issued rwts: total=9576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:38.471 00:26:38.471 Run status group 0 (all jobs): 00:26:38.471 READ: bw=59.9MiB/s (62.8MB/s), 14.9MiB/s-15.0MiB/s (15.7MB/s-15.8MB/s), io=300MiB (314MB), run=5001-5003msec 00:26:38.730 10:26:58 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:38.730 10:26:58 -- target/dif.sh@43 -- # local sub 00:26:38.730 10:26:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.730 10:26:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:38.730 10:26:58 -- target/dif.sh@36 -- # local sub_id=0 00:26:38.730 10:26:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.730 10:26:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:38.730 10:26:58 -- target/dif.sh@36 -- # local sub_id=1 00:26:38.730 10:26:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 00:26:38.730 real 0m23.297s 00:26:38.730 user 2m6.616s 00:26:38.730 sys 0m4.592s 00:26:38.730 10:26:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:38.730 ************************************ 00:26:38.730 END TEST fio_dif_rand_params 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 ************************************ 00:26:38.730 10:26:58 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:38.730 10:26:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:38.730 10:26:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 ************************************ 00:26:38.730 START TEST fio_dif_digest 00:26:38.730 ************************************ 00:26:38.730 10:26:58 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:26:38.730 10:26:58 -- target/dif.sh@123 -- # local NULL_DIF 00:26:38.730 10:26:58 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:38.730 10:26:58 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:38.730 10:26:58 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:38.730 10:26:58 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:38.730 10:26:58 -- target/dif.sh@127 -- # numjobs=3 00:26:38.730 10:26:58 -- target/dif.sh@127 -- # iodepth=3 00:26:38.730 10:26:58 -- target/dif.sh@127 -- # runtime=10 00:26:38.730 10:26:58 -- target/dif.sh@128 -- # hdgst=true 00:26:38.730 10:26:58 -- target/dif.sh@128 -- # ddgst=true 00:26:38.730 10:26:58 -- target/dif.sh@130 -- # create_subsystems 0 00:26:38.730 10:26:58 -- target/dif.sh@28 -- # local sub 00:26:38.730 10:26:58 -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.730 10:26:58 -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.730 10:26:58 -- target/dif.sh@18 -- # local sub_id=0 00:26:38.730 10:26:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 bdev_null0 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.730 10:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.730 10:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.730 [2024-11-19 10:26:58.139870] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.730 10:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.730 10:26:58 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:38.730 10:26:58 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:38.730 10:26:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:38.730 10:26:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.730 10:26:58 -- nvmf/common.sh@520 -- # config=() 00:26:38.730 10:26:58 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.730 10:26:58 -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.730 10:26:58 -- nvmf/common.sh@520 -- # local subsystem config 00:26:38.730 10:26:58 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:26:38.730 10:26:58 -- target/dif.sh@54 -- # local file 00:26:38.730 10:26:58 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.730 10:26:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.730 10:26:58 -- target/dif.sh@56 -- # cat 00:26:38.730 10:26:58 -- common/autotest_common.sh@1328 -- # local sanitizers 00:26:38.730 10:26:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.730 { 00:26:38.730 "params": { 00:26:38.730 "name": "Nvme$subsystem", 00:26:38.730 "trtype": "$TEST_TRANSPORT", 00:26:38.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.730 "adrfam": "ipv4", 00:26:38.730 "trsvcid": "$NVMF_PORT", 00:26:38.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.730 "hdgst": ${hdgst:-false}, 00:26:38.730 "ddgst": ${ddgst:-false} 00:26:38.730 }, 00:26:38.730 "method": "bdev_nvme_attach_controller" 00:26:38.730 } 00:26:38.730 EOF 00:26:38.730 )") 00:26:38.730 10:26:58 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.730 10:26:58 -- common/autotest_common.sh@1330 -- # shift 00:26:38.730 10:26:58 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:26:38.730 10:26:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.730 10:26:58 -- nvmf/common.sh@542 -- # cat 00:26:38.730 10:26:58 -- common/autotest_common.sh@1334 -- # grep libasan 00:26:38.730 10:26:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.730 10:26:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.730 10:26:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:38.730 10:26:58 -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.730 10:26:58 -- nvmf/common.sh@544 -- # jq . 
00:26:38.731 10:26:58 -- nvmf/common.sh@545 -- # IFS=, 00:26:38.731 10:26:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:38.731 "params": { 00:26:38.731 "name": "Nvme0", 00:26:38.731 "trtype": "tcp", 00:26:38.731 "traddr": "10.0.0.2", 00:26:38.731 "adrfam": "ipv4", 00:26:38.731 "trsvcid": "4420", 00:26:38.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.731 "hdgst": true, 00:26:38.731 "ddgst": true 00:26:38.731 }, 00:26:38.731 "method": "bdev_nvme_attach_controller" 00:26:38.731 }' 00:26:38.731 10:26:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:38.731 10:26:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:38.731 10:26:58 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.731 10:26:58 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:38.731 10:26:58 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:26:38.731 10:26:58 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:26:38.731 10:26:58 -- common/autotest_common.sh@1334 -- # asan_lib= 00:26:38.731 10:26:58 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:26:38.731 10:26:58 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:38.731 10:26:58 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.989 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:38.989 ... 00:26:38.989 fio-3.35 00:26:38.989 Starting 3 threads 00:26:39.248 [2024-11-19 10:26:58.665900] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:39.248 [2024-11-19 10:26:58.665971] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:51.525 00:26:51.525 filename0: (groupid=0, jobs=1): err= 0: pid=102092: Tue Nov 19 10:27:08 2024 00:26:51.525 read: IOPS=167, BW=21.0MiB/s (22.0MB/s)(211MiB/10045msec) 00:26:51.525 slat (nsec): min=8191, max=44949, avg=12688.56, stdev=2726.30 00:26:51.525 clat (usec): min=9855, max=52621, avg=17843.37, stdev=1650.98 00:26:51.525 lat (usec): min=9867, max=52632, avg=17856.06, stdev=1651.22 00:26:51.525 clat percentiles (usec): 00:26:51.525 | 1.00th=[12125], 5.00th=[16057], 10.00th=[16450], 20.00th=[16909], 00:26:51.525 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18220], 00:26:51.525 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19530], 00:26:51.525 | 99.00th=[20055], 99.50th=[20317], 99.90th=[47973], 99.95th=[52691], 00:26:51.525 | 99.99th=[52691] 00:26:51.525 bw ( KiB/s): min=20992, max=22272, per=27.52%, avg=21598.32, stdev=445.13, samples=19 00:26:51.525 iops : min= 164, max= 174, avg=168.74, stdev= 3.48, samples=19 00:26:51.525 lat (msec) : 10=0.06%, 20=98.99%, 50=0.89%, 100=0.06% 00:26:51.525 cpu : usr=94.41%, sys=4.49%, ctx=6, majf=0, minf=11 00:26:51.525 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 issued rwts: total=1685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.525 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.525 filename0: (groupid=0, jobs=1): err= 0: pid=102093: Tue Nov 19 10:27:08 2024 00:26:51.525 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(296MiB/10006msec) 00:26:51.525 slat (nsec): min=8015, max=51771, avg=12893.28, stdev=2940.71 00:26:51.525 clat (usec): min=7648, max=56076, avg=12653.22, stdev=2233.46 00:26:51.525 lat (usec): min=7660, max=56090, avg=12666.11, stdev=2233.49 00:26:51.525 clat percentiles (usec): 00:26:51.525 | 1.00th=[10683], 5.00th=[11338], 10.00th=[11600], 20.00th=[11994], 00:26:51.525 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:26:51.525 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 00:26:51.525 | 99.00th=[14877], 99.50th=[16319], 99.90th=[53740], 99.95th=[55837], 00:26:51.525 | 99.99th=[55837] 00:26:51.525 bw ( KiB/s): min=27648, max=31488, per=38.67%, avg=30356.21, stdev=936.01, samples=19 00:26:51.525 iops : min= 216, max= 246, avg=237.16, stdev= 7.31, samples=19 00:26:51.525 lat (msec) : 10=0.21%, 20=99.54%, 100=0.25% 00:26:51.525 cpu : usr=93.35%, sys=5.29%, ctx=11, majf=0, minf=9 00:26:51.525 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.525 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.525 filename0: (groupid=0, jobs=1): err= 0: pid=102094: Tue Nov 19 10:27:08 2024 00:26:51.525 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(263MiB/10005msec) 00:26:51.525 slat (nsec): min=7577, max=70208, avg=13403.28, stdev=4449.70 00:26:51.525 clat (usec): min=7958, max=19519, avg=14230.86, stdev=1247.46 00:26:51.525 lat (usec): min=7968, max=19536, avg=14244.27, stdev=1247.97 00:26:51.525 clat percentiles (usec): 
00:26:51.525 | 1.00th=[ 9110], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:26:51.525 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:26:51.525 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15795], 95.00th=[16188], 00:26:51.525 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:26:51.525 | 99.99th=[19530] 00:26:51.525 bw ( KiB/s): min=24576, max=28416, per=34.28%, avg=26906.95, stdev=910.69, samples=19 00:26:51.525 iops : min= 192, max= 222, avg=210.21, stdev= 7.11, samples=19 00:26:51.525 lat (msec) : 10=1.28%, 20=98.72% 00:26:51.525 cpu : usr=93.44%, sys=5.13%, ctx=248, majf=0, minf=9 00:26:51.525 IO depths : 1=3.9%, 2=96.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:51.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:51.525 issued rwts: total=2106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:51.525 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:51.525 00:26:51.525 Run status group 0 (all jobs): 00:26:51.525 READ: bw=76.7MiB/s (80.4MB/s), 21.0MiB/s-29.6MiB/s (22.0MB/s-31.0MB/s), io=770MiB (807MB), run=10005-10045msec 00:26:51.525 10:27:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:51.525 10:27:08 -- target/dif.sh@43 -- # local sub 00:26:51.525 10:27:08 -- target/dif.sh@45 -- # for sub in "$@" 00:26:51.525 10:27:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:51.525 10:27:08 -- target/dif.sh@36 -- # local sub_id=0 00:26:51.525 10:27:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.525 10:27:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.525 10:27:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.525 10:27:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.525 10:27:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:51.525 10:27:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.525 10:27:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.525 10:27:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.525 00:26:51.525 real 0m10.898s 00:26:51.525 user 0m28.754s 00:26:51.525 sys 0m1.711s 00:26:51.525 10:27:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:51.525 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.525 ************************************ 00:26:51.525 END TEST fio_dif_digest 00:26:51.525 ************************************ 00:26:51.525 10:27:09 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:51.525 10:27:09 -- target/dif.sh@147 -- # nvmftestfini 00:26:51.525 10:27:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:51.525 10:27:09 -- nvmf/common.sh@116 -- # sync 00:26:51.525 10:27:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:51.525 10:27:09 -- nvmf/common.sh@119 -- # set +e 00:26:51.525 10:27:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:51.525 10:27:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:51.525 rmmod nvme_tcp 00:26:51.525 rmmod nvme_fabrics 00:26:51.525 rmmod nvme_keyring 00:26:51.525 10:27:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:51.525 10:27:09 -- nvmf/common.sh@123 -- # set -e 00:26:51.525 10:27:09 -- nvmf/common.sh@124 -- # return 0 00:26:51.525 10:27:09 -- nvmf/common.sh@477 -- # '[' -n 101335 ']' 00:26:51.526 10:27:09 -- nvmf/common.sh@478 -- # killprocess 101335 00:26:51.526 10:27:09 -- common/autotest_common.sh@936 -- # '[' -z 101335 ']' 
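The digest test above drives its I/O through fio's external spdk_bdev engine; the bdev_nvme_attach_controller parameters it generated (header and data digest enabled) are printed verbatim earlier in the log. A rough reconstruction of that invocation, assuming the plugin path from the trace and writing the config to a regular file instead of a process-substitution fd; the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config wrapper, not part of the printed fragment:

  # config taken from the printed attach-controller parameters above
  echo '{"subsystems": [{"subsystem": "bdev", "config": [{
    "method": "bdev_nvme_attach_controller",
    "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
               "adrfam": "ipv4", "trsvcid": "4420",
               "subnqn": "nqn.2016-06.io.spdk:cnode0",
               "hostnqn": "nqn.2016-06.io.spdk:host0",
               "hdgst": true, "ddgst": true}}]}]}' > bdev.json
  # thread=1 is required by the SPDK fio plugin; the job shape mirrors the
  # filename0 job above (randread, 128k blocks, iodepth 3, 3 jobs, 10s)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --thread=1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --time_based --runtime=10 --filename=Nvme0n1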
00:26:51.526 10:27:09 -- common/autotest_common.sh@940 -- # kill -0 101335 00:26:51.526 10:27:09 -- common/autotest_common.sh@941 -- # uname 00:26:51.526 10:27:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:51.526 10:27:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 101335 00:26:51.526 10:27:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:51.526 10:27:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:51.526 killing process with pid 101335 00:26:51.526 10:27:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 101335' 00:26:51.526 10:27:09 -- common/autotest_common.sh@955 -- # kill 101335 00:26:51.526 10:27:09 -- common/autotest_common.sh@960 -- # wait 101335 00:26:51.526 10:27:09 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:51.526 10:27:09 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:51.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:51.526 Waiting for block devices as requested 00:26:51.526 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:51.526 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:51.526 10:27:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:51.526 10:27:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:51.526 10:27:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.526 10:27:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:51.526 10:27:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.526 10:27:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:51.526 10:27:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.526 10:27:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:51.526 00:26:51.526 real 0m59.249s 00:26:51.526 user 3m50.955s 00:26:51.526 sys 0m14.209s 00:26:51.526 10:27:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:51.526 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.526 ************************************ 00:26:51.526 END TEST nvmf_dif 00:26:51.526 ************************************ 00:26:51.526 10:27:09 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:51.526 10:27:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:51.526 10:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:51.526 10:27:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.526 ************************************ 00:26:51.526 START TEST nvmf_abort_qd_sizes 00:26:51.526 ************************************ 00:26:51.526 10:27:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:51.526 * Looking for test storage... 
00:26:51.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:51.526 10:27:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:51.526 10:27:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:51.526 10:27:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:51.526 10:27:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:51.526 10:27:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:51.526 10:27:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:51.526 10:27:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:51.526 10:27:10 -- scripts/common.sh@335 -- # IFS=.-: 00:26:51.526 10:27:10 -- scripts/common.sh@335 -- # read -ra ver1 00:26:51.526 10:27:10 -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.526 10:27:10 -- scripts/common.sh@336 -- # read -ra ver2 00:26:51.526 10:27:10 -- scripts/common.sh@337 -- # local 'op=<' 00:26:51.526 10:27:10 -- scripts/common.sh@339 -- # ver1_l=2 00:26:51.526 10:27:10 -- scripts/common.sh@340 -- # ver2_l=1 00:26:51.526 10:27:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:51.526 10:27:10 -- scripts/common.sh@343 -- # case "$op" in 00:26:51.526 10:27:10 -- scripts/common.sh@344 -- # : 1 00:26:51.526 10:27:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:51.526 10:27:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.526 10:27:10 -- scripts/common.sh@364 -- # decimal 1 00:26:51.526 10:27:10 -- scripts/common.sh@352 -- # local d=1 00:26:51.526 10:27:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.526 10:27:10 -- scripts/common.sh@354 -- # echo 1 00:26:51.526 10:27:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:51.526 10:27:10 -- scripts/common.sh@365 -- # decimal 2 00:26:51.526 10:27:10 -- scripts/common.sh@352 -- # local d=2 00:26:51.526 10:27:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.526 10:27:10 -- scripts/common.sh@354 -- # echo 2 00:26:51.526 10:27:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:51.526 10:27:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:51.526 10:27:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:51.526 10:27:10 -- scripts/common.sh@367 -- # return 0 00:26:51.526 10:27:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.526 10:27:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:51.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.526 --rc genhtml_branch_coverage=1 00:26:51.526 --rc genhtml_function_coverage=1 00:26:51.526 --rc genhtml_legend=1 00:26:51.526 --rc geninfo_all_blocks=1 00:26:51.526 --rc geninfo_unexecuted_blocks=1 00:26:51.526 00:26:51.526 ' 00:26:51.526 10:27:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:51.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.526 --rc genhtml_branch_coverage=1 00:26:51.526 --rc genhtml_function_coverage=1 00:26:51.526 --rc genhtml_legend=1 00:26:51.526 --rc geninfo_all_blocks=1 00:26:51.526 --rc geninfo_unexecuted_blocks=1 00:26:51.526 00:26:51.526 ' 00:26:51.526 10:27:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:51.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.526 --rc genhtml_branch_coverage=1 00:26:51.526 --rc genhtml_function_coverage=1 00:26:51.526 --rc genhtml_legend=1 00:26:51.526 --rc geninfo_all_blocks=1 00:26:51.526 --rc geninfo_unexecuted_blocks=1 00:26:51.526 00:26:51.526 ' 00:26:51.526 
10:27:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:51.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.526 --rc genhtml_branch_coverage=1 00:26:51.526 --rc genhtml_function_coverage=1 00:26:51.526 --rc genhtml_legend=1 00:26:51.526 --rc geninfo_all_blocks=1 00:26:51.526 --rc geninfo_unexecuted_blocks=1 00:26:51.526 00:26:51.526 ' 00:26:51.526 10:27:10 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:51.526 10:27:10 -- nvmf/common.sh@7 -- # uname -s 00:26:51.526 10:27:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.526 10:27:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.526 10:27:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.526 10:27:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.526 10:27:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.526 10:27:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.526 10:27:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.526 10:27:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.526 10:27:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.526 10:27:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.526 10:27:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a 00:26:51.526 10:27:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=71696525-119b-4582-ab28-8c254b64780a 00:26:51.526 10:27:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.526 10:27:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.526 10:27:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:51.526 10:27:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:51.526 10:27:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.526 10:27:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.527 10:27:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.527 10:27:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.527 10:27:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.527 10:27:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.527 10:27:10 -- paths/export.sh@5 -- # export PATH 00:26:51.527 10:27:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.527 10:27:10 -- nvmf/common.sh@46 -- # : 0 00:26:51.527 10:27:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:51.527 10:27:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:51.527 10:27:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:51.527 10:27:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.527 10:27:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.527 10:27:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:51.527 10:27:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:51.527 10:27:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:51.527 10:27:10 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:51.527 10:27:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:51.527 10:27:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.527 10:27:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:51.527 10:27:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:51.527 10:27:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:51.527 10:27:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.527 10:27:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:51.527 10:27:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.527 10:27:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:51.527 10:27:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:51.527 10:27:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:51.527 10:27:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:51.527 10:27:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:51.527 10:27:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:51.527 10:27:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.527 10:27:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.527 10:27:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:51.527 10:27:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:51.527 10:27:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:51.527 10:27:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:51.527 10:27:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:51.527 10:27:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.527 10:27:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:51.527 10:27:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:51.527 10:27:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:51.527 10:27:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:51.527 10:27:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:51.527 10:27:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:51.527 Cannot find device "nvmf_tgt_br" 00:26:51.527 10:27:10 -- nvmf/common.sh@154 -- # true 00:26:51.527 10:27:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:51.527 Cannot find device "nvmf_tgt_br2" 00:26:51.527 10:27:10 -- nvmf/common.sh@155 -- # true 
00:26:51.527 10:27:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:51.527 10:27:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:51.527 Cannot find device "nvmf_tgt_br" 00:26:51.527 10:27:10 -- nvmf/common.sh@157 -- # true 00:26:51.527 10:27:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:51.527 Cannot find device "nvmf_tgt_br2" 00:26:51.527 10:27:10 -- nvmf/common.sh@158 -- # true 00:26:51.527 10:27:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:51.527 10:27:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:51.527 10:27:10 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:51.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:51.527 10:27:10 -- nvmf/common.sh@161 -- # true 00:26:51.527 10:27:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:51.527 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:51.527 10:27:10 -- nvmf/common.sh@162 -- # true 00:26:51.527 10:27:10 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:51.527 10:27:10 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:51.527 10:27:10 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:51.527 10:27:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:51.527 10:27:10 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:51.527 10:27:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:51.527 10:27:10 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:51.527 10:27:10 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:51.527 10:27:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:51.527 10:27:10 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:51.527 10:27:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:51.527 10:27:10 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:51.527 10:27:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:51.527 10:27:10 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:51.527 10:27:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:51.527 10:27:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:51.527 10:27:10 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:51.527 10:27:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:51.527 10:27:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:51.527 10:27:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:51.527 10:27:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:51.527 10:27:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:51.527 10:27:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:51.527 10:27:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:51.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:26:51.527 00:26:51.527 --- 10.0.0.2 ping statistics --- 00:26:51.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.527 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:26:51.527 10:27:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:51.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:51.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:26:51.527 00:26:51.527 --- 10.0.0.3 ping statistics --- 00:26:51.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.527 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:51.527 10:27:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:51.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:26:51.527 00:26:51.527 --- 10.0.0.1 ping statistics --- 00:26:51.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.527 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:26:51.527 10:27:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.527 10:27:10 -- nvmf/common.sh@421 -- # return 0 00:26:51.527 10:27:10 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:51.527 10:27:10 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:51.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:51.786 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:51.786 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:51.786 10:27:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.786 10:27:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:51.786 10:27:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:51.786 10:27:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.786 10:27:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:51.786 10:27:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:52.046 10:27:11 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:52.046 10:27:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:52.046 10:27:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:52.046 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.046 10:27:11 -- nvmf/common.sh@469 -- # nvmfpid=102690 00:26:52.046 10:27:11 -- nvmf/common.sh@470 -- # waitforlisten 102690 00:26:52.046 10:27:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:52.046 10:27:11 -- common/autotest_common.sh@829 -- # '[' -z 102690 ']' 00:26:52.046 10:27:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.046 10:27:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:52.046 10:27:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.046 10:27:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:52.046 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.046 [2024-11-19 10:27:11.397712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
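Before the target app comes up, nvmf_veth_init builds the virtual topology that the ping checks above verify: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over a veth/bridge fabric. Condensed from the ip commands in the trace (interface bring-up and the second target interface are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host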
00:26:52.046 [2024-11-19 10:27:11.397812] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.046 [2024-11-19 10:27:11.533100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:52.046 [2024-11-19 10:27:11.568596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:52.046 [2024-11-19 10:27:11.568754] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.046 [2024-11-19 10:27:11.568766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.046 [2024-11-19 10:27:11.568775] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.046 [2024-11-19 10:27:11.568940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.046 [2024-11-19 10:27:11.569094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:52.046 [2024-11-19 10:27:11.569565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.046 [2024-11-19 10:27:11.569597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.305 10:27:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:52.305 10:27:11 -- common/autotest_common.sh@862 -- # return 0 00:26:52.305 10:27:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:52.305 10:27:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.305 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.305 10:27:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:52.305 10:27:11 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:52.305 10:27:11 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:52.305 10:27:11 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:52.305 10:27:11 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:52.305 10:27:11 -- scripts/common.sh@312 -- # local nvmes 00:26:52.305 10:27:11 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:52.305 10:27:11 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:52.305 10:27:11 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:52.305 10:27:11 -- scripts/common.sh@297 -- # local bdf= 00:26:52.305 10:27:11 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:52.305 10:27:11 -- scripts/common.sh@232 -- # local class 00:26:52.305 10:27:11 -- scripts/common.sh@233 -- # local subclass 00:26:52.305 10:27:11 -- scripts/common.sh@234 -- # local progif 00:26:52.305 10:27:11 -- scripts/common.sh@235 -- # printf %02x 1 00:26:52.305 10:27:11 -- scripts/common.sh@235 -- # class=01 00:26:52.305 10:27:11 -- scripts/common.sh@236 -- # printf %02x 8 00:26:52.305 10:27:11 -- scripts/common.sh@236 -- # subclass=08 00:26:52.305 10:27:11 -- scripts/common.sh@237 -- # printf %02x 2 00:26:52.305 10:27:11 -- scripts/common.sh@237 -- # progif=02 00:26:52.305 10:27:11 -- scripts/common.sh@239 -- # hash lspci 00:26:52.305 10:27:11 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:52.305 10:27:11 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:52.305 10:27:11 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:52.305 10:27:11 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:52.305 10:27:11 -- scripts/common.sh@244 -- # tr -d '"' 00:26:52.305 10:27:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:52.305 10:27:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:52.305 10:27:11 -- scripts/common.sh@15 -- # local i 00:26:52.305 10:27:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:52.306 10:27:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:52.306 10:27:11 -- scripts/common.sh@24 -- # return 0 00:26:52.306 10:27:11 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:52.306 10:27:11 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:52.306 10:27:11 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:52.306 10:27:11 -- scripts/common.sh@15 -- # local i 00:26:52.306 10:27:11 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:52.306 10:27:11 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:52.306 10:27:11 -- scripts/common.sh@24 -- # return 0 00:26:52.306 10:27:11 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:52.306 10:27:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:52.306 10:27:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:52.306 10:27:11 -- scripts/common.sh@322 -- # uname -s 00:26:52.306 10:27:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:52.306 10:27:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:52.306 10:27:11 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:52.306 10:27:11 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:52.306 10:27:11 -- scripts/common.sh@322 -- # uname -s 00:26:52.306 10:27:11 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:52.306 10:27:11 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:52.306 10:27:11 -- scripts/common.sh@327 -- # (( 2 )) 00:26:52.306 10:27:11 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:52.306 10:27:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:52.306 10:27:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:52.306 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.306 ************************************ 00:26:52.306 START TEST spdk_target_abort 00:26:52.306 ************************************ 00:26:52.306 10:27:11 -- common/autotest_common.sh@1114 -- # spdk_target 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:52.306 10:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.306 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.306 spdk_targetn1 00:26:52.306 10:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.306 10:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.306 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.306 [2024-11-19 
10:27:11.836954] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.306 10:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.306 10:27:11 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:52.306 10:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.306 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.564 10:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.564 10:27:11 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:52.564 10:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.564 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.564 10:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.564 10:27:11 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:52.564 10:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.564 10:27:11 -- common/autotest_common.sh@10 -- # set +x 00:26:52.564 [2024-11-19 10:27:11.865135] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.564 10:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:52.565 10:27:11 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:55.848 Initializing NVMe Controllers 00:26:55.848 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:55.848 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:55.848 Initialization complete. Launching workers. 00:26:55.848 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11305, failed: 0 00:26:55.848 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1033, failed to submit 10272 00:26:55.848 success 766, unsuccess 267, failed 0 00:26:55.848 10:27:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:55.848 10:27:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:59.189 [2024-11-19 10:27:18.352918] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 [2024-11-19 10:27:18.352981] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 [2024-11-19 10:27:18.352993] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 [2024-11-19 10:27:18.353001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 [2024-11-19 10:27:18.353010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 [2024-11-19 10:27:18.353019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2caf0 is same with the state(5) to be set 00:26:59.189 Initializing NVMe Controllers 00:26:59.189 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:59.189 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:59.189 Initialization complete. Launching workers. 00:26:59.189 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5899, failed: 0 00:26:59.189 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1226, failed to submit 4673 00:26:59.189 success 246, unsuccess 980, failed 0 00:26:59.189 10:27:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.189 10:27:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:27:02.474 Initializing NVMe Controllers 00:27:02.474 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:27:02.474 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:27:02.474 Initialization complete. Launching workers. 
00:27:02.474 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30617, failed: 0 00:27:02.474 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2649, failed to submit 27968 00:27:02.474 success 488, unsuccess 2161, failed 0 00:27:02.474 10:27:21 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:27:02.474 10:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.474 10:27:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.474 10:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.474 10:27:21 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:02.474 10:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.474 10:27:21 -- common/autotest_common.sh@10 -- # set +x 00:27:02.732 10:27:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.732 10:27:22 -- target/abort_qd_sizes.sh@62 -- # killprocess 102690 00:27:02.732 10:27:22 -- common/autotest_common.sh@936 -- # '[' -z 102690 ']' 00:27:02.733 10:27:22 -- common/autotest_common.sh@940 -- # kill -0 102690 00:27:02.733 10:27:22 -- common/autotest_common.sh@941 -- # uname 00:27:02.733 10:27:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:02.733 10:27:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 102690 00:27:02.733 killing process with pid 102690 00:27:02.733 10:27:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:02.733 10:27:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:02.733 10:27:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 102690' 00:27:02.733 10:27:22 -- common/autotest_common.sh@955 -- # kill 102690 00:27:02.733 10:27:22 -- common/autotest_common.sh@960 -- # wait 102690 00:27:02.991 ************************************ 00:27:02.991 END TEST spdk_target_abort 00:27:02.991 ************************************ 00:27:02.991 00:27:02.991 real 0m10.641s 00:27:02.991 user 0m40.645s 00:27:02.991 sys 0m1.653s 00:27:02.991 10:27:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:02.991 10:27:22 -- common/autotest_common.sh@10 -- # set +x 00:27:02.991 10:27:22 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:27:02.991 10:27:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:02.991 10:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.991 10:27:22 -- common/autotest_common.sh@10 -- # set +x 00:27:02.991 ************************************ 00:27:02.991 START TEST kernel_target_abort 00:27:02.991 ************************************ 00:27:02.991 10:27:22 -- common/autotest_common.sh@1114 -- # kernel_target 00:27:02.991 10:27:22 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:27:02.991 10:27:22 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:27:02.991 10:27:22 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:27:02.991 10:27:22 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:27:02.991 10:27:22 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:27:02.991 10:27:22 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:02.991 10:27:22 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:02.991 10:27:22 -- nvmf/common.sh@627 -- # local block nvme 00:27:02.991 10:27:22 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:27:02.991 10:27:22 -- nvmf/common.sh@630 -- # modprobe nvmet 00:27:02.991 10:27:22 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:02.991 10:27:22 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:03.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.559 Waiting for block devices as requested 00:27:03.559 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:03.559 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:27:03.559 10:27:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:03.559 10:27:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:03.559 10:27:23 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:27:03.559 10:27:23 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:27:03.559 10:27:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:03.559 No valid GPT data, bailing 00:27:03.559 10:27:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # pt= 00:27:03.818 10:27:23 -- scripts/common.sh@394 -- # return 1 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:27:03.818 10:27:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:03.818 10:27:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:27:03.818 10:27:23 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:27:03.818 10:27:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:03.818 No valid GPT data, bailing 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # pt= 00:27:03.818 10:27:23 -- scripts/common.sh@394 -- # return 1 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:27:03.818 10:27:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:03.818 10:27:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:27:03.818 10:27:23 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:27:03.818 10:27:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:27:03.818 No valid GPT data, bailing 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # pt= 00:27:03.818 10:27:23 -- scripts/common.sh@394 -- # return 1 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:27:03.818 10:27:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:27:03.818 10:27:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:27:03.818 10:27:23 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:27:03.818 10:27:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:27:03.818 No valid GPT data, bailing 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:27:03.818 10:27:23 -- scripts/common.sh@393 -- # pt= 00:27:03.818 10:27:23 -- scripts/common.sh@394 -- # return 1 00:27:03.818 10:27:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:27:03.818 10:27:23 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:27:03.818 10:27:23 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:03.818 10:27:23 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:03.818 10:27:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:03.818 10:27:23 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:27:03.818 10:27:23 -- nvmf/common.sh@654 -- # echo 1 00:27:03.818 10:27:23 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:27:03.818 10:27:23 -- nvmf/common.sh@656 -- # echo 1 00:27:03.818 10:27:23 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:27:03.818 10:27:23 -- nvmf/common.sh@663 -- # echo tcp 00:27:03.818 10:27:23 -- nvmf/common.sh@664 -- # echo 4420 00:27:03.818 10:27:23 -- nvmf/common.sh@665 -- # echo ipv4 00:27:03.818 10:27:23 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:04.077 10:27:23 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:71696525-119b-4582-ab28-8c254b64780a --hostid=71696525-119b-4582-ab28-8c254b64780a -a 10.0.0.1 -t tcp -s 4420 00:27:04.077 00:27:04.077 Discovery Log Number of Records 2, Generation counter 2 00:27:04.077 =====Discovery Log Entry 0====== 00:27:04.077 trtype: tcp 00:27:04.077 adrfam: ipv4 00:27:04.077 subtype: current discovery subsystem 00:27:04.077 treq: not specified, sq flow control disable supported 00:27:04.077 portid: 1 00:27:04.077 trsvcid: 4420 00:27:04.077 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:04.077 traddr: 10.0.0.1 00:27:04.077 eflags: none 00:27:04.077 sectype: none 00:27:04.077 =====Discovery Log Entry 1====== 00:27:04.077 trtype: tcp 00:27:04.077 adrfam: ipv4 00:27:04.077 subtype: nvme subsystem 00:27:04.077 treq: not specified, sq flow control disable supported 00:27:04.077 portid: 1 00:27:04.077 trsvcid: 4420 00:27:04.077 subnqn: kernel_target 00:27:04.077 traddr: 10.0.0.1 00:27:04.077 eflags: none 00:27:04.077 sectype: none 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
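For kernel_target_abort the target is the in-kernel nvmet driver configured through configfs rather than the SPDK app. Condensed from the mkdir/echo/ln commands in the trace; xtrace does not show the redirect targets, so the attribute file names below are the standard nvmet configfs names, while the backing device /dev/nvme1n3 and the 10.0.0.1:4420 TCP port are taken from the log:

  sub=/sys/kernel/config/nvmet/subsystems/kernel_target
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet                  # nvmet-tcp must also be available for a tcp port
  mkdir $sub $sub/namespaces/1 $port
  echo SPDK-kernel_target > $sub/attr_model     # model string (attribute name assumed)
  echo 1 > $sub/attr_allow_any_host
  echo /dev/nvme1n3 > $sub/namespaces/1/device_path
  echo 1 > $sub/namespaces/1/enable
  echo 10.0.0.1 > $port/addr_traddr
  echo tcp > $port/addr_trtype
  echo 4420 > $port/addr_trsvcid
  echo ipv4 > $port/addr_adrfam
  ln -s $sub $port/subsystems/kernel_target
  nvme discover -t tcp -a 10.0.0.1 -s 4420      # should list kernel_target, as in the log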
00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.077 10:27:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:07.363 Initializing NVMe Controllers 00:27:07.363 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:07.363 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:07.363 Initialization complete. Launching workers. 00:27:07.363 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31191, failed: 0 00:27:07.363 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31191, failed to submit 0 00:27:07.363 success 0, unsuccess 31191, failed 0 00:27:07.363 10:27:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:07.363 10:27:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:10.650 Initializing NVMe Controllers 00:27:10.650 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:10.650 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:10.650 Initialization complete. Launching workers. 00:27:10.650 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68026, failed: 0 00:27:10.650 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29502, failed to submit 38524 00:27:10.650 success 0, unsuccess 29502, failed 0 00:27:10.650 10:27:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:10.650 10:27:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:27:13.935 Initializing NVMe Controllers 00:27:13.935 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:27:13.935 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:27:13.935 Initialization complete. Launching workers. 
00:27:13.935 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 81232, failed: 0 00:27:13.935 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20286, failed to submit 60946 00:27:13.935 success 0, unsuccess 20286, failed 0 00:27:13.935 10:27:32 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:27:13.935 10:27:32 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:27:13.935 10:27:32 -- nvmf/common.sh@677 -- # echo 0 00:27:13.935 10:27:32 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:27:13.935 10:27:32 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:27:13.935 10:27:32 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:13.935 10:27:32 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:27:13.935 10:27:32 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:27:13.935 10:27:32 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:27:13.935 ************************************ 00:27:13.935 END TEST kernel_target_abort 00:27:13.935 ************************************ 00:27:13.935 00:27:13.935 real 0m10.510s 00:27:13.935 user 0m5.780s 00:27:13.935 sys 0m2.189s 00:27:13.935 10:27:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:13.935 10:27:32 -- common/autotest_common.sh@10 -- # set +x 00:27:13.935 10:27:32 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:27:13.935 10:27:32 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:27:13.935 10:27:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:13.935 10:27:32 -- nvmf/common.sh@116 -- # sync 00:27:13.935 10:27:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:13.935 10:27:33 -- nvmf/common.sh@119 -- # set +e 00:27:13.935 10:27:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:13.935 10:27:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:13.935 rmmod nvme_tcp 00:27:13.935 rmmod nvme_fabrics 00:27:13.935 rmmod nvme_keyring 00:27:13.935 10:27:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:13.935 10:27:33 -- nvmf/common.sh@123 -- # set -e 00:27:13.935 10:27:33 -- nvmf/common.sh@124 -- # return 0 00:27:13.935 10:27:33 -- nvmf/common.sh@477 -- # '[' -n 102690 ']' 00:27:13.935 10:27:33 -- nvmf/common.sh@478 -- # killprocess 102690 00:27:13.935 10:27:33 -- common/autotest_common.sh@936 -- # '[' -z 102690 ']' 00:27:13.935 10:27:33 -- common/autotest_common.sh@940 -- # kill -0 102690 00:27:13.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (102690) - No such process 00:27:13.935 Process with pid 102690 is not found 00:27:13.935 10:27:33 -- common/autotest_common.sh@963 -- # echo 'Process with pid 102690 is not found' 00:27:13.935 10:27:33 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:27:13.935 10:27:33 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:14.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:14.192 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:14.450 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:14.450 10:27:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:14.450 10:27:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:14.450 10:27:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.450 10:27:33 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:27:14.450 10:27:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.450 10:27:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:14.450 10:27:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.450 10:27:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:27:14.450 00:27:14.450 real 0m23.857s 00:27:14.450 user 0m47.643s 00:27:14.450 sys 0m5.104s 00:27:14.450 10:27:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:14.450 10:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:14.450 ************************************ 00:27:14.450 END TEST nvmf_abort_qd_sizes 00:27:14.450 ************************************ 00:27:14.450 10:27:33 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:27:14.450 10:27:33 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:27:14.450 10:27:33 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:27:14.450 10:27:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:14.450 10:27:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:14.450 10:27:33 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:27:14.450 10:27:33 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:27:14.450 10:27:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:14.450 10:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:14.450 10:27:33 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:27:14.450 10:27:33 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:27:14.450 10:27:33 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:27:14.450 10:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:16.353 INFO: APP EXITING 00:27:16.353 INFO: killing all VMs 00:27:16.353 INFO: killing vhost app 00:27:16.353 INFO: EXIT DONE 00:27:16.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:16.612 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:27:16.870 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:27:17.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:17.438 Cleaning 00:27:17.438 Removing: /var/run/dpdk/spdk0/config 00:27:17.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:17.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:17.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:17.438 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:17.438 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:17.438 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:17.438 Removing: /var/run/dpdk/spdk1/config 00:27:17.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:17.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:17.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:27:17.438 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:17.438 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:17.438 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:17.438 Removing: /var/run/dpdk/spdk2/config 00:27:17.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:17.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:17.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:17.438 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:17.438 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:17.438 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:17.438 Removing: /var/run/dpdk/spdk3/config 00:27:17.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:17.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:17.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:17.438 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:17.438 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:17.438 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:17.438 Removing: /var/run/dpdk/spdk4/config 00:27:17.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:17.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:17.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:17.438 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:17.438 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:17.438 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:17.438 Removing: /dev/shm/nvmf_trace.0 00:27:17.438 Removing: /dev/shm/spdk_tgt_trace.pid67346 00:27:17.438 Removing: /var/run/dpdk/spdk0 00:27:17.438 Removing: /var/run/dpdk/spdk1 00:27:17.697 Removing: /var/run/dpdk/spdk2 00:27:17.697 Removing: /var/run/dpdk/spdk3 00:27:17.697 Removing: /var/run/dpdk/spdk4 00:27:17.697 Removing: /var/run/dpdk/spdk_pid100222 00:27:17.697 Removing: /var/run/dpdk/spdk_pid100509 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101037 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101042 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101410 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101570 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101721 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101818 00:27:17.697 Removing: /var/run/dpdk/spdk_pid101973 00:27:17.697 Removing: /var/run/dpdk/spdk_pid102082 00:27:17.697 Removing: /var/run/dpdk/spdk_pid102751 00:27:17.697 Removing: /var/run/dpdk/spdk_pid102782 00:27:17.697 Removing: /var/run/dpdk/spdk_pid102817 00:27:17.697 Removing: /var/run/dpdk/spdk_pid103066 00:27:17.697 Removing: /var/run/dpdk/spdk_pid103096 00:27:17.697 Removing: /var/run/dpdk/spdk_pid103131 00:27:17.697 Removing: /var/run/dpdk/spdk_pid67194 00:27:17.697 Removing: /var/run/dpdk/spdk_pid67346 00:27:17.697 Removing: /var/run/dpdk/spdk_pid67662 00:27:17.697 Removing: /var/run/dpdk/spdk_pid67943 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68126 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68215 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68313 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68405 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68444 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68479 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68542 00:27:17.697 Removing: /var/run/dpdk/spdk_pid68646 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69279 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69343 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69412 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69440 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69520 00:27:17.697 Removing: 
/var/run/dpdk/spdk_pid69548 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69616 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69650 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69701 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69731 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69777 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69807 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69966 00:27:17.697 Removing: /var/run/dpdk/spdk_pid69996 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70078 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70147 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70166 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70225 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70244 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70278 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70293 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70327 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70341 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70376 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70390 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70424 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70444 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70473 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70492 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70527 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70541 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70575 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70595 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70624 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70638 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70678 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70692 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70721 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70745 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70775 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70789 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70824 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70843 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70872 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70892 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70926 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70940 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70975 00:27:17.697 Removing: /var/run/dpdk/spdk_pid70989 00:27:17.697 Removing: /var/run/dpdk/spdk_pid71023 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71046 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71078 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71100 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71138 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71152 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71186 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71206 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71236 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71313 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71412 00:27:17.698 Removing: /var/run/dpdk/spdk_pid71827 00:27:17.956 Removing: /var/run/dpdk/spdk_pid78712 00:27:17.956 Removing: /var/run/dpdk/spdk_pid79047 00:27:17.956 Removing: /var/run/dpdk/spdk_pid81472 00:27:17.956 Removing: /var/run/dpdk/spdk_pid81860 00:27:17.956 Removing: /var/run/dpdk/spdk_pid82141 00:27:17.956 Removing: /var/run/dpdk/spdk_pid82187 00:27:17.956 Removing: /var/run/dpdk/spdk_pid82492 00:27:17.956 Removing: /var/run/dpdk/spdk_pid82538 00:27:17.956 Removing: /var/run/dpdk/spdk_pid82918 00:27:17.956 Removing: /var/run/dpdk/spdk_pid83470 00:27:17.956 Removing: /var/run/dpdk/spdk_pid83902 00:27:17.956 Removing: /var/run/dpdk/spdk_pid84889 00:27:17.956 Removing: /var/run/dpdk/spdk_pid85834 00:27:17.956 Removing: /var/run/dpdk/spdk_pid85957 
00:27:17.956 Removing: /var/run/dpdk/spdk_pid86016 00:27:17.956 Removing: /var/run/dpdk/spdk_pid87492 00:27:17.956 Removing: /var/run/dpdk/spdk_pid87732 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88170 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88280 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88417 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88445 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88477 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88518 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88668 00:27:17.956 Removing: /var/run/dpdk/spdk_pid88807 00:27:17.956 Removing: /var/run/dpdk/spdk_pid89058 00:27:17.956 Removing: /var/run/dpdk/spdk_pid89162 00:27:17.956 Removing: /var/run/dpdk/spdk_pid89584 00:27:17.956 Removing: /var/run/dpdk/spdk_pid89957 00:27:17.956 Removing: /var/run/dpdk/spdk_pid89960 00:27:17.956 Removing: /var/run/dpdk/spdk_pid92222 00:27:17.956 Removing: /var/run/dpdk/spdk_pid92529 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93024 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93031 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93369 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93383 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93401 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93432 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93438 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93588 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93591 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93694 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93700 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93804 00:27:17.956 Removing: /var/run/dpdk/spdk_pid93812 00:27:17.956 Removing: /var/run/dpdk/spdk_pid94304 00:27:17.956 Removing: /var/run/dpdk/spdk_pid94347 00:27:17.956 Removing: /var/run/dpdk/spdk_pid94505 00:27:17.956 Removing: /var/run/dpdk/spdk_pid94627 00:27:17.956 Removing: /var/run/dpdk/spdk_pid95029 00:27:17.956 Removing: /var/run/dpdk/spdk_pid95280 00:27:17.956 Removing: /var/run/dpdk/spdk_pid95769 00:27:17.956 Removing: /var/run/dpdk/spdk_pid96328 00:27:17.956 Removing: /var/run/dpdk/spdk_pid96771 00:27:17.956 Removing: /var/run/dpdk/spdk_pid96841 00:27:17.956 Removing: /var/run/dpdk/spdk_pid96918 00:27:17.956 Removing: /var/run/dpdk/spdk_pid96990 00:27:17.956 Removing: /var/run/dpdk/spdk_pid97122 00:27:17.956 Removing: /var/run/dpdk/spdk_pid97193 00:27:17.956 Removing: /var/run/dpdk/spdk_pid97270 00:27:17.956 Removing: /var/run/dpdk/spdk_pid97341 00:27:17.956 Removing: /var/run/dpdk/spdk_pid97687 00:27:17.956 Removing: /var/run/dpdk/spdk_pid98373 00:27:17.956 Removing: /var/run/dpdk/spdk_pid99738 00:27:17.956 Removing: /var/run/dpdk/spdk_pid99939 00:27:17.956 Clean 00:27:18.215 killing process with pid 61523 00:27:18.215 killing process with pid 61526 00:27:18.215 10:27:37 -- common/autotest_common.sh@1446 -- # return 0 00:27:18.215 10:27:37 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:27:18.215 10:27:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.215 10:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:18.215 10:27:37 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:27:18.215 10:27:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.215 10:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:18.215 10:27:37 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:18.215 10:27:37 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:18.215 10:27:37 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:18.215 10:27:37 
-- spdk/autotest.sh@381 -- # [[ y == y ]] 00:27:18.215 10:27:37 -- spdk/autotest.sh@383 -- # hostname 00:27:18.215 10:27:37 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:18.474 geninfo: WARNING: invalid characters removed from testname! 00:27:45.015 10:28:01 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.392 10:28:05 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.955 10:28:08 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:51.495 10:28:10 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:54.029 10:28:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:56.562 10:28:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:59.095 10:28:18 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:59.353 10:28:18 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:27:59.353 10:28:18 -- common/autotest_common.sh@1690 -- $ lcov --version 00:27:59.353 10:28:18 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:27:59.353 10:28:18 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:27:59.353 10:28:18 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:27:59.353 10:28:18 -- scripts/common.sh@332 -- $ local ver1 ver1_l 
00:27:59.353 10:28:18 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:27:59.353 10:28:18 -- scripts/common.sh@335 -- $ IFS=.-: 00:27:59.353 10:28:18 -- scripts/common.sh@335 -- $ read -ra ver1 00:27:59.353 10:28:18 -- scripts/common.sh@336 -- $ IFS=.-: 00:27:59.353 10:28:18 -- scripts/common.sh@336 -- $ read -ra ver2 00:27:59.353 10:28:18 -- scripts/common.sh@337 -- $ local 'op=<' 00:27:59.353 10:28:18 -- scripts/common.sh@339 -- $ ver1_l=2 00:27:59.353 10:28:18 -- scripts/common.sh@340 -- $ ver2_l=1 00:27:59.353 10:28:18 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:27:59.353 10:28:18 -- scripts/common.sh@343 -- $ case "$op" in 00:27:59.353 10:28:18 -- scripts/common.sh@344 -- $ : 1 00:27:59.353 10:28:18 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:27:59.353 10:28:18 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:59.353 10:28:18 -- scripts/common.sh@364 -- $ decimal 1 00:27:59.353 10:28:18 -- scripts/common.sh@352 -- $ local d=1 00:27:59.353 10:28:18 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:27:59.353 10:28:18 -- scripts/common.sh@354 -- $ echo 1 00:27:59.353 10:28:18 -- scripts/common.sh@364 -- $ ver1[v]=1 00:27:59.353 10:28:18 -- scripts/common.sh@365 -- $ decimal 2 00:27:59.353 10:28:18 -- scripts/common.sh@352 -- $ local d=2 00:27:59.353 10:28:18 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:27:59.353 10:28:18 -- scripts/common.sh@354 -- $ echo 2 00:27:59.353 10:28:18 -- scripts/common.sh@365 -- $ ver2[v]=2 00:27:59.353 10:28:18 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:27:59.353 10:28:18 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:27:59.353 10:28:18 -- scripts/common.sh@367 -- $ return 0 00:27:59.353 10:28:18 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:59.353 10:28:18 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:27:59.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.353 --rc genhtml_branch_coverage=1 00:27:59.353 --rc genhtml_function_coverage=1 00:27:59.353 --rc genhtml_legend=1 00:27:59.353 --rc geninfo_all_blocks=1 00:27:59.353 --rc geninfo_unexecuted_blocks=1 00:27:59.353 00:27:59.353 ' 00:27:59.353 10:28:18 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:27:59.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.353 --rc genhtml_branch_coverage=1 00:27:59.353 --rc genhtml_function_coverage=1 00:27:59.353 --rc genhtml_legend=1 00:27:59.353 --rc geninfo_all_blocks=1 00:27:59.353 --rc geninfo_unexecuted_blocks=1 00:27:59.353 00:27:59.353 ' 00:27:59.353 10:28:18 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:27:59.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.354 --rc genhtml_branch_coverage=1 00:27:59.354 --rc genhtml_function_coverage=1 00:27:59.354 --rc genhtml_legend=1 00:27:59.354 --rc geninfo_all_blocks=1 00:27:59.354 --rc geninfo_unexecuted_blocks=1 00:27:59.354 00:27:59.354 ' 00:27:59.354 10:28:18 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:27:59.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:59.354 --rc genhtml_branch_coverage=1 00:27:59.354 --rc genhtml_function_coverage=1 00:27:59.354 --rc genhtml_legend=1 00:27:59.354 --rc geninfo_all_blocks=1 00:27:59.354 --rc geninfo_unexecuted_blocks=1 00:27:59.354 00:27:59.354 ' 00:27:59.354 10:28:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:59.354 10:28:18 -- scripts/common.sh@433 -- $ 
[[ -e /bin/wpdk_common.sh ]] 00:27:59.354 10:28:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:59.354 10:28:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:59.354 10:28:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.354 10:28:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.354 10:28:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.354 10:28:18 -- paths/export.sh@5 -- $ export PATH 00:27:59.354 10:28:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:59.354 10:28:18 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:59.354 10:28:18 -- common/autobuild_common.sh@440 -- $ date +%s 00:27:59.354 10:28:18 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732012098.XXXXXX 00:27:59.354 10:28:18 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732012098.ZB9WVg 00:27:59.354 10:28:18 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:27:59.354 10:28:18 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:27:59.354 10:28:18 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:27:59.354 10:28:18 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:27:59.354 10:28:18 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:59.354 10:28:18 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:59.354 10:28:18 -- common/autobuild_common.sh@456 -- $ get_config_params 00:27:59.354 10:28:18 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:27:59.354 10:28:18 -- common/autotest_common.sh@10 -- $ set +x 00:27:59.354 10:28:18 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests 
--enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:27:59.354 10:28:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:59.354 10:28:18 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:59.354 10:28:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:59.354 10:28:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:27:59.354 10:28:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:59.354 10:28:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:59.354 10:28:18 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:59.354 10:28:18 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:59.354 10:28:18 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:59.354 10:28:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:59.354 + [[ -n 5959 ]] 00:27:59.354 + sudo kill 5959 00:27:59.621 [Pipeline] } 00:27:59.637 [Pipeline] // timeout 00:27:59.643 [Pipeline] } 00:27:59.657 [Pipeline] // stage 00:27:59.663 [Pipeline] } 00:27:59.678 [Pipeline] // catchError 00:27:59.688 [Pipeline] stage 00:27:59.691 [Pipeline] { (Stop VM) 00:27:59.703 [Pipeline] sh 00:27:59.983 + vagrant halt 00:28:03.270 ==> default: Halting domain... 00:28:09.840 [Pipeline] sh 00:28:10.113 + vagrant destroy -f 00:28:14.303 ==> default: Removing domain... 00:28:14.315 [Pipeline] sh 00:28:14.595 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:28:14.604 [Pipeline] } 00:28:14.619 [Pipeline] // stage 00:28:14.624 [Pipeline] } 00:28:14.639 [Pipeline] // dir 00:28:14.644 [Pipeline] } 00:28:14.659 [Pipeline] // wrap 00:28:14.665 [Pipeline] } 00:28:14.678 [Pipeline] // catchError 00:28:14.687 [Pipeline] stage 00:28:14.689 [Pipeline] { (Epilogue) 00:28:14.702 [Pipeline] sh 00:28:14.982 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:21.575 [Pipeline] catchError 00:28:21.577 [Pipeline] { 00:28:21.591 [Pipeline] sh 00:28:21.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:22.132 Artifacts sizes are good 00:28:22.142 [Pipeline] } 00:28:22.157 [Pipeline] // catchError 00:28:22.169 [Pipeline] archiveArtifacts 00:28:22.177 Archiving artifacts 00:28:22.297 [Pipeline] cleanWs 00:28:22.308 [WS-CLEANUP] Deleting project workspace... 00:28:22.308 [WS-CLEANUP] Deferred wipeout is used... 00:28:22.315 [WS-CLEANUP] done 00:28:22.317 [Pipeline] } 00:28:22.333 [Pipeline] // stage 00:28:22.339 [Pipeline] } 00:28:22.353 [Pipeline] // node 00:28:22.359 [Pipeline] End of Pipeline 00:28:22.401 Finished: SUCCESS
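The TEST kernel_target_abort block earlier in this log drives the SPDK abort example against the kernel NVMe-oF/TCP target at queue depths 4, 24 and 64, then removes the configfs target. A minimal stand-alone sketch of that sequence follows, intended to be run from the SPDK repo root; the queue depths, connection string, subsystem name kernel_target and configfs paths are taken from the commands traced above, while the loop structure and the redirect target of the traced 'echo 0' are assumptions rather than copies of target/abort_qd_sizes.sh or nvmf/common.sh.

# Sketch: queue-depth abort sweep against the kernel nvmet TCP target (reconstructed from the trace, not the harness script itself)
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

# Teardown mirroring the clean_kernel_target trace
echo 0 > /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1/enable   # assumed target of the traced 'echo 0'
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target
rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/kernel_target
modprobe -r nvmet_tcp nvmet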